A researcher in America has proposed a theory that children distrust technology because it is presented to them in an unfamiliar format:
Danovitch’s theory as to why kids behave this way is that the idea of voice assistants—and by extension, the internet—is amorphous and hard to grasp. If you’re a child who thinks there’s a tiny woman who lives in the kitchen called Alexa (as Danovitch says her son did), you’re trying to wrap your head not only around how this thing works but what its knowledge base is in the first place. Trusting another person, on the other hand, is hardwired into our brains.
It seems obvious that looking at other countries, cultures, and societies would help illuminate the human reasons for distrust of technology, so I wonder why only American children were studied. The aside about “her son” suggests the researcher may be operating on an extremely narrow, inward-looking scale of human observation, rather than looking outward to far more diverse situations.
The conclusion also raises the question of how early an introduction to technology would have to come before it achieves “hardwired” status for this researcher.
I’m skeptical of that hardwired theory. Moreover, I’m skeptical that the issues observed are about trust that comes from familiarity so much as about social training in playfulness and the boundaries of authority. Look at how the same researcher describes a teacher as a counterexample to technology:
Turns out kids overwhelmingly trust a teacher—even if the teacher is wrong. That makes sense: they know their teacher, and that teacher has developed a strong relationship with them. But the kids preferred their peers over the internet, too, even though they knew their friends had roughly the same amount of knowledge as they did.
A teacher is typically presented to children as someone who will penalize them for distrust.
In other words, the cost of distrust in a teacher is a bad grade or even detention. The cost of distrust in peers is exclusion or social exile. What’s the cost of disagreeing with a toy? Knowing something, as in being familiar, probably isn’t the trigger of trust so much as knowing where one is authorized to play and push back.
Another way of looking at this is that children reflect active trends in society earlier than adults may recognize them. We see this, of course, in clothing and music.
So if adults are giving off signals that technology is not to be trusted (as we should, for example, because Amazon has serious and repeated ethics failures), then children will take that position much more readily (as they should, for example, because they don’t have a history of trust to get in their way).
Studies do show that, globally, trust in American technology companies is declining because they keep getting caught harming society.
The Trust Barometer survey revealed that 61 percent of people in developed markets believe technology companies have too much power to determine what news and information we see, and only 39 percent of respondents in developed markets believe tech is putting the welfare of its customers ahead of profits.
The researcher should be asking children whether they think the technology is working in that child’s best interest, or in the best interest of all children, and then ask them whether they think there is any penalty for distrusting or disagreeing with the technology.