I'll be a bull hug, you'll be at me (AI) vs. I see a blue horse looking at me (KI)
LLM / AI has difficulty understanding kids.
When I heard that on my podcast, I immediately laughed out loud, because that's the most relatable thing I've heard about LLMs so far. Who on earth understands kids, besides maybe, and just maybe, their parents?
It's a learning process.
It is for all of us. It makes sense that LLMs have difficulty understanding kids: they are trained on and by adults, so they inherit the same incapacities that we adults have. One of those is understanding the overwhelmingly creative construction of new words, which is what kids do while learning a language.
But kids learn from adults, too.
Yeah, that's a good point. Any intelligence is only as good as its training data - if you want to be a parent, or already are one, please repeat that sentence slowly. It's very, very meaningful. It's like a soft Turing Test for parents.
So, if LLMs, which are supposedly self-learning software, do learn from adults, why do they learn so differently? What would happen if we programmed them in a way that makes them learn like kids? A KI, a Kids' Intelligence?
Thank the frequency that they wouldn't need potty training.
It's the kind of data, actually. Kids' data comes through the senses; an LLM's data is binary. That's my guess anyway, I'm not geek enough to know for sure.
But back to topic.
The fact that LLMs do not understand kids is a problem for the world. Because one of the first things that occurred to many politicians was:
Awesome, we can create artificial teachers!
Yeah, that's the solution. It's never making the most important job in any society more attractive, it's replacing it with machines. Makes sense, because the system wants workers, not emotionally capable and holistically intelligent people. That has worked so well for us until now. We so deserve to be in the spot we're in...
But what if the teacher doesn't understand the kids?
Oh. Hm. Yeah. That might not be good. I mean, most teachers do a great job keeping up to speed with all those neologisms that teenagers especially come up with on a daily basis. So, is the salvation of capita... sorry, society postponed?
Feed them with kids!
Okay, their voices. That would help, wouldn't it? Have them learn from the kids themselves? But speech samples from kids are hard to come by, as kids and their private data are, for good reason, very well protected. And yes, voice samples are private data.
Rejoice! For there was more code!
When we speak, the program en-codes our speech into machine language (binary). Unfortunately, it en-codes everything: not only the content, but also the parts that are not important to understanding the words spoken, like tone and clarity. The de-coder then brings it back from machine language into human text, including errors produced by that excess data. The solution was to put another piece of software between encoding and decoding, a filter that takes out most of the "noise" and therefore simplifies the de-coder's task, leading to fewer mistakes. In lab conditions, the error rate came down from 57% to 11%.
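The encode → filter → decode idea can be sketched roughly like this. To be clear, this is a toy illustration, not the actual research software: the function names and the simple amplitude-threshold filter are my own assumptions, standing in for whatever the real system does.

```python
# Toy sketch of the pipeline described above: encoder -> noise filter -> decoder.
# The threshold filter is a placeholder for the real (far smarter) middle stage.

def encode(samples):
    """Pretend 'encoder': quantize raw audio samples to integers."""
    return [round(s * 100) for s in samples]

def noise_filter(encoded, threshold=5):
    """The inserted middle stage: zero out low-energy values,
    so the decoder mostly sees the content-bearing signal."""
    return [v if abs(v) >= threshold else 0 for v in encoded]

def decode(encoded):
    """Pretend 'decoder': bring the integers back to floats."""
    return [v / 100 for v in encoded]

noisy = [0.5, 0.02, -0.7, -0.03, 0.6]   # speech plus small noise
cleaned = decode(noise_filter(encode(noisy)))
# The small noise values are suppressed; the content values survive.
```

The point of the design is exactly what the paragraph above says: the decoder's job gets easier because the filter removes most of what the decoder would otherwise trip over.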
11% is better than me...
And Lily is almost seven years old. Still, I don't understand everything she says. Might be the obnoxious mix of Spanish and German - my insisting on speaking only German to her is coming back to bite me now. Might be her sporadic excursions into fantasy lands. Might be me being busy and not listening closely. 11% mistakes is impressive. If you don't believe me, go outside and try to have a conversation with a kid (make sure it's not a random kid in the park, jail time is no fun), then you'll understand, because you won't understand.
Anyhow, our education system is saved.
Glorious days are coming for our society, with LLM-supported classes and kindergartens. Oh, did I mention that the other brilliant politician idea was to use LLMs in pediatric psychiatry?
Doesn't that sound like a lovely idea?
At least the scientists who developed the filter software have a better idea. They're from Northwestern University in Illinois:
To use it to detect speech disorders early on.
Why are scientists such bad politicians?
Maybe because they're more honest. Now, the inevitable question:
Are YOU good training data?
Additional info:
Podcast: First part of "Forschung Aktuell 21.05.2025", Deutschlandfunk, Arndt Reuning. Part presented by Frank Grotelüschen. Link to the part: Click here! Beware, it's in German.
And it is this...
Kids learn from AI too. We are increasingly pushing them into environments that are controlled, that are programmed, that don't have the randomisation of life that they require to enrich their lives and become robust individuals and members of society. As a result, they end up being triggered by every situation that is unfamiliar - and life is full of new experiences....
Yes, I thought the same. Mechanical education leads to machines, and it kills all the bonds that we have with our children, all the human part of us. It shows how much children are seen merely as future human resources, not as integral beings anymore. It's also a reflection of where we are currently - real, deep human connections are not wanted anymore; they're seen as inefficient. Communities are not wanted anymore; they're time spent away from work. The interesting part is that people who have a balanced life (and I don't mean superficially, but really balanced) are usually far more productive. At least I think I heard about that study in the same podcast once. And it's also my experience - in the little time I have to work at the bakery, I get a lot more done than the rest. There is pressure, yes, but there's also the joy of being in the spot that I want to be in. Coming home, cooking, picking up Lily, doing homework, meeting friends on the weekend - all that makes me lighter, and I can think more easily.
Education hasn't been valued for a very long time, unfortunately. It's become simply training for a job - that's why the humanities, literature, and history have been progressively eliminated from curricula. They're not "practical." It's not "practical" to teach young humans to grow into adult humans.

As horrific as it is, I think the desire to use LLMs to teach is just taking that one final step further. No human interaction at all, no risk of the connections with great teachers that change people's lives - just treating children as blank hard drives into which we can download information. It's scary. And the thought of using LLMs as psychiatrists is scarier still.
However, there's one ray of hope. These things don't work as well as the hype would have us believe. I have a little hope that some people are waking up and recognizing the limitations of LLMs.
Though I'm not, by nature or training, an optimist.