Raising children on the eve of AI

I think of us in some kind of twilight world as transformative AI looks more likely: things are about to change, and I don’t know if it’s about to get a lot darker or a lot brighter.

Increasingly this makes me wonder how I should be raising my kids differently.

Why I’m thinking about this

I’m somewhat used to thinking of this in terms of “doom / not doom” and less used to thinking in terms of “what kind of transformation?” 

One thing that got me thinking beyond that binary was historian Ian Morris on Whether deep history says we’re heading for an intelligence explosion, specifically why we should expect the future to be wild.

Another was this interview with Ben Garfinkel of the Centre for the Governance of AI, starting with reasons you might think AI will change things a lot:

  • If you think it’ll be comparable to the industrial revolution, that sure altered people’s work and personal lives a lot
  • Maybe enough work will be automated that people won’t really have jobs
  • AI could exacerbate and destabilize political conflicts, so we might see more political chaos and/or war
  • Really powerful, capable AI systems could behave unexpectedly or go wrong in lots of ways

What might the world look like?

Most of my imaginings about my children’s lives have them in pretty normal futures, where they go to college and have jobs and do normal human stuff, but with better phones.

It’s hard for me to imagine the other versions:

  • A lot of us are killed or incapacitated by AI
  • More war, pandemics, and general chaos
  • Post-scarcity utopia, possibly with people living as uploads rather than in bodies that get sick and die
  • Some other weird outcome I haven’t imagined

Even in the world where change is slower, more like the speed of the industrial revolution, I feel a bit like we’re preparing children to be good blacksmiths or shoemakers in 1750 when the factory is coming. The families around us are still very much focused on the track of do well in school > get into a good college > have a career > have a nice life. It seems really likely that chain will change a lot sometime in my children’s lifetimes.

When?

Of course it would have been premature in 1750 not to teach your child blacksmithing or shoemaking, because the factory and the steam engine took a while to replace older forms of work. And history is full of millennialist groups who wrongly believed the world was about to end or radically change.

I don’t want to be a crackpot who fails to prepare my children for the fairly normal future ahead of them because I wrongly believe something weird is about to happen. I may be entirely wrong, or I may be wrong about the timing.

Is it even ok to have kids?

Is it fair to the kids?

This question has been asked many times by people contemplating awful things in the world. My friend’s parents asked their priest if it was ok to have a child in the 1980s given the risk of nuclear war. Fortunately for my friend, the priest said yes.

I find this objection very unintuitive, but I think the logic goes: it wouldn’t be fair to create lives that will be cut short and never reach their potential. To me it feels pretty clear that if someone will have a reasonably happy life, it’s better for them to live and have their life cut short than to never be born. When we asked our older kids about this, they said they’re glad to be alive even if humans don’t last much longer.

I’m not sure about babies, but to me it seems that by age 1 or so, most kids are having a pretty good time overall. There’s not good data on children’s happiness, maybe because it’s hard to know how meaningful their answers are. But there sure seems to be a U-shaped curve that children are on one end of. This indicates to me that even if my children only get another 5 or 10 or 20 years, that’s still very worthwhile for them.

This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence about having kids, you should think more about it.

What about the effects on your work?

If you’re considering whether to have children, and you think your work can make a difference to what kind of outcomes we see from AI, that’s a different question. Some approaches that both seem valid to me:

  • “I’m allowed to make significant personal decisions how I want, even if it decreases my focus on work”
  • “I care more about this work going as well as it can than I do about fulfillment in my personal life”

There are some theories about how parenting will make you more productive or motivated, which I don’t really buy (especially for mothers). I do buy that it would be corrosive for a field to have a norm that forgoing children is a signal of being a Dedicated, High-Impact Person.

One option seems to be “spend a lot of money on childcare,” which still seems positive for the kids compared with not existing. 

In the meantime

Our kids do normal things like school, partly because our pandemic experience makes me think that even if it became clear school wasn’t useful, they would not be happier if we somehow pulled them out.

I’m trying to lean toward more grasshopper, less ant. Live like life might be short. More travel even when it means missing school, more hugs, more things that are fun for them. 

We got kittens.

What skills or mindsets will be helpful?

It feels like in a lot of possible scenarios, nothing we could do to prepare the kids will particularly matter. Or what turns out to be helpful is so weird we can’t predict it well. So we’re just thinking about this for the possible futures where some skills matter, and we can predict them to some degree.

I haven’t really looked into what careers are less automatable; that seems worth looking at once teenagers or young adults are moving toward careers. I wouldn’t be surprised if childcare is actually one of the most human-specialized jobs at some point.

Some thoughts from other parents:

  • A friend pointed out that it’s good if children’s self-image isn’t built too heavily around the idea of a career, given the high chance that careers as we know them won’t be a thing.
  • “For now I basically just want her to be happy and healthy and curious and learn things.”
  • “I think it’s worth focusing on fundamental characteristics for a good life: high self esteem and optimistic outlook towards life, problem solving and creative thinking, high emotional intelligence, hobbies/sports/activities that they truly enjoy, being AI- and tech-native.”
  • “I’m less worried about mine being doctors or engineers. I feel more confident they should just pursue their passions.”

How much contact with AI?

I know some parents who are encouraging kids to play around with generative AI, with the idea that being “AI-native” will help them be better prepared for the future. 

Currently my guess is that the risk of the kids falling into some weird headspace, falling in love with the AI or something, outweighs the benefits. As Joe Carlsmith writes: “If they want, AIs will be cool, cutting, sophisticated, intimidating. They will speak in subtle and expressive human voices. And sufficiently superintelligent ones will know you better than you know yourself – better than any guru, friend, parent, therapist.”

Maybe in a few years it’ll be impossible to keep my children away from this coolest of cool kids. But currently I’m not trying to hasten that.

What we say to them

Not a lot. One of our kids has been interested in the possibility of human extinction at points, starting when she learned about the dinosaurs. (She used to check out the window to see if any asteroids were headed our way.)

We’ve occasionally talked about AI risk, and biorisk a bit more, but the kids don’t really grasp anything worse than the pandemic we just went through. I think they’re more viscerally worried about climate change and the loss of panda habitats, because they’ve heard more about those from sources outside the family.

CS Lewis in 1948

I think this quote doesn’t do justice to “Try hard to avert futures where we all get destroyed,” but I still find it useful.

“If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.”

Related writing

Zvi’s AI: Practical advice for the worried, with section “Does it still make sense to try and have kids?” and thoughts on jobs.

Anna Salamon and Oliver Habryka on whether people who care about existential risk should have children.

  1. Craig C

    Nice article with perspective/questions I hadn’t thought of before. That said, this isn’t the “AI” that people talked about, and that was my conception, back in the Asimov era (50s-80s). Back then, the “I” in AI was emphasized, as in “intelligence”, which also implied agency. What they call AI now feels more like what was originally known as data mining, only with a widely expanded database and combined with what I will call “impersonation algorithms.” Both of those elements of 2020s AI have their obvious dangers. What I fear about 2020s AI is that it isn’t really “intelligent” at all — it doesn’t *know* whether the things it knows are truthful or not, in spite of what its “impersonation” algorithm may report. As a result, humans who aren’t very careful or skeptical may place more trust in these AI non-agents than they deserve — which I think is the real danger.

    2030s “AI” will probably be much different than our current “AI” — and maybe someday such a system may in fact achieve “agency” and/or “awareness”, in which case, watch out. I remember when I was a kid, reading about astronomy and learning, around age 9-10, that someday the sun was going to go red-giant and swallow up the solar system, burning up Earth in the process. This thought has stuck with me all these decades. There is no infinite future for us, and regardless, it makes as much sense worrying about those power-of-ten descendants of ours as, say, Plato worrying about us, his own 2-plus-millennium descendants.

    “Life as we know it” seems to only last 10-15 years, then it’s not that anymore. And so it goes, as Vonnegut said. Thanks for the well-written article and for allowing me to riff on it.

  2. N

    You are right to be asking these questions. There is a lot that can happen, and predicting a fine path forward is impossible for anyone.

    Currently, I think doom is more likely than not for a very simple reason. If you put a long list of different doom scenarios on a conveyor belt, one right after the other with increasing frequency, eventually a mistake is made, and that’s it: civilization collapses or extinction occurs. In a nutshell, that pretty much sums up the last 50 years, or roughly two to three generations of kicking the can.

    Today, there are more irrational people than rational. People mistake training for intelligence, and around the 1970s was the point of ecological overshoot for humanity. We’ve managed to avoid the consequences of Malthus’s law of population solely because previous generations built systems following rational principles, and many of those systems have been torn down or corrupted for short-term gain.

    Resilient systems are flexible systems, but when you look around at various aspects of our society, our systems are brittle, and they’ve become progressively more brittle and coercive over time, violating fundamental documents and rights; people are talking about intolerable acts and a lack of conflict resolution (related to the rule of law not being present anymore). I can understand a lot of the finer details of those arguments, and they are not wrong; but nothing effective is being done to correct these things.

    The world doesn’t change on its own, people change the world, and given that things today are worse than they were for previous generations, it’s clear the people who were entrusted with the responsibility to organize and safeguard have failed in their responsibilities quite dramatically, or in some cases outright violated their oaths through deceit, corruption, coercion, etc.

    Worse, instead of fixing the problem, it’s been hidden by removing education on subjects that would empower individuals (such as logic and critical thinking, and how coercion/deception/credibility work). It is sorely lacking, and the public school system today is more akin to a Maoist re-education camp promoting socialism/Marxism without calling it what it is (often prior to the age of reason), indoctrination that’s solely designed to encourage unthinking loyal workers, who won’t have jobs thanks to AI.

    The simple mechanics are that when no labor is needed, no work means no food; and with a rapidly depleting store of value being carved out to pay for fraud, wage stagnation, and other mistakes, ultimately when reserve currencies fail, everything else fails.

    From where I am sitting, we have at least 10 different unrelated doom scenarios that are going to hit in rapid succession, and those are just the solid ones (not considering the feasibility of AGI) that we’ll have to deal with over the next decade.

    Most of these issues were predicted with time to address them (some as far back as 1930, e.g. Mises wrote all about the structural problems with socialist/syndicalist/Marxist policies), but no action was taken, and worse, interference led to the suppression of the science solely to benefit a minority over the majority.

    As a result of these policies, I find it very likely that we’ll see a massive collapse in population.

    The way systems and engines work, interference reaches a point where friction causes seizing and then the dominoes fall. The last 30-40 years have been business concentration cycles, followed by bailouts, followed by a smaller market nearly every time, fueled by cheap debt provided by the primary banks and the Fed. It was disguised as prosperity by QE (which started permanently in 2010 with the abandonment of the sound dollar policy), but the bill always comes due.

    Typically, supply chains fail briefly and shortages occur; the failures self-sustain because policies have been put in place preventing self-correction, and economic calculation becomes impossible to project in the short term. When no profit can be had, businesses first produce less, then stop producing. Logistics and energy fail, then food. It won’t happen overnight, but the dynamics present build and cascade like a dam breaking or an avalanche.

    Utopias never happen, and most people fail to understand the rational underpinnings for how things were made to work, and the credibility and trust required for things to work. Modern Monetary Theory is in the same category, relying almost entirely on the misuse of statistics.

    Personally, I would have liked to have children despite these uncertain times, but that’s probably not in the cards.
    There is third-party interference there too: communications platforms dropping messages without any notification or indicator (ghosting), and dating platforms matching people with others who are not compatible (the business model being that they lose a customer when a compatible match is found). The vast majority of people in the past 20 or so years have no idea how to date or what’s acceptable and unacceptable; people go on dates because they are interested, and spending all their time on their phone instead is so very disrespectful.

    This makes it almost impossible to find suitable partners, and the economics of work today, even for high-end jobs, often don’t provide enough money to cover the expenses tied to raising children in many locales.

  3. Jamie

    It’s bad to promote not having children; God said to be fruitful and multiply. That child you don’t have could have been the one that solves important issues in the world. Not to mention it’s fulfilling to raise a child and carry on your family line into the future.
    The current crop of elites that look upon Humanity as a scourge to be eliminated are the ones that need to be “reprogrammed” or removed. The highest rule for AI and robots should be to preserve humanity at all costs. Instead of replacing workers, the goal should be to enhance and expand the jobs available to humans, while throwing the low-paying repetitive work to the robots.

  4. Keven

    In the meantime, I’m going to keep making the most of AI to help raise my kid. It’s one of the best tools to feed curiosity. Paired with voice interaction, I’ve watched a 6-year-old ask questions and go through a dozen follow-ups – in plain English, and occasionally asking it to illustrate certain things (DALL-E integrated into ChatGPT). Kick off the discussion by telling ChatGPT it is interacting with a 6-year-old, and it’ll craft all its responses in a manner that a 6-year-old understands.
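    A minimal sketch of that kind of setup, assuming the OpenAI Python SDK; the model name and the exact wording of the system prompt are illustrative, not something from this comment:

        # Prime the conversation with a system message so replies stay at a
        # 6-year-old's level; the model name "gpt-4o" is an assumption.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        messages = [
            {"role": "system",
             "content": ("You are talking with a curious 6-year-old. Use short, "
                         "simple sentences, be patient, and welcome follow-up "
                         "questions.")},
            {"role": "user", "content": "Why is the sky blue?"},
        ]

        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        print(reply.choices[0].message.content)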

    The Infinite Patience is overlooked and underrated.
