How to talk to your family about AI

The UK government has decided to spend a fortune on promoting AI to the masses, and they are not exactly impartial.

If you have family or friends adopting "AI" tools, make sure they do not fully swallow the Kool-Aid. Pro-gen-AI propaganda is going into hyperdrive as the promised returns on the huge investments fail to materialise.

You might have seen the flop-sweaty pleading from the over-invested AI bigwigs. They need more people to use their tools more often if we are to avoid a catastrophic bubble bursting.

Speaking at the World Economic Forum in Davos, Switzerland on Tuesday, Microsoft CEO Satya Nadella pontificated about what would constitute such a speculative bubble, and said that the long-term success of AI tech hinges on it being used across a broad range of industries - as well as on an uptick in adoption in the developing world, where it is less popular, the Financial Times reports. If AI fails, in other words, it's everyone else's fault for not using it.

The thing is, AI up to now has been largely "meh" for most people. Yeah, some people find it so fascinating and essential that they literally want to marry their virtual fren, but study after study has found limited to no productivity boost for general business tasks.

This is a problem when much of the GDP growth of whole nations depends on more data centers, more GPU investment, more hype in the hype cycle.

So we get to the point in the UK where the official line is to shovel more and more bodies into the system, Soylent Green style.

"We will force AI into everything until you start adopting it willingly"

One problem I hear with this new curriculum, largely supplied by Amazon, Google and Microsoft, is that the lessons - roughly 20-minute talks, stretching to several hours for the paid courses - mostly cover how to register with their chosen "partners" (almost all US-based giants so far) and ask them questions.

The NHS, the British Chambers of Commerce and the Local Government Association are among those who have committed to encouraging their staff and members to sign up.

The lessons are whole or in part supplied by their tech partners who build the tools. How much will be spent educating on the risks and dangers, I wonder?

Will the merit badges include "Successfully generated revenge pr0n of ex-girlfriend" or will it be restricted to "Clippy writes my emails now"?

As a techy who is expected to know about these things, I have been learning about AI, machine learning and computer vision for a long time.

There are uses for this stuff, excellent uses, but pushing and coercing people into using the tools where they are unhelpful or misleading will cause problems, and not just the horrors of unclothing women and girls.

Grok AI generated about three million sexualised images in less than two weeks, including 23,000 that appear to depict children, according to researchers, who said it "became an industrial-scale machine for the production of sexual abuse material".

The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs, and worsen global financial inequality. Critics also highlight the tech's potential to reproduce biased information, or discriminate against some social groups.

The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under strain.

Generative AI systems are known for their ability to "hallucinate" and assert falsehoods as fact, even sometimes inventing sources for the inaccurate information.

Apple halted a new AI feature in January after it incorrectly summarised news app notifications.

The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione - the man accused of killing UnitedHealthcare CEO Brian Thompson - had shot himself.

Google has also faced criticism over inaccurate answers produced by its AI search overviews.

Thousands of creators - including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris, and actress Julianne Moore - signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.

AI is increasingly used in schools and workplaces to help summarise texts, write emails or essays and fix bugs in code.

There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work.



It's interesting how they try to gamify things these days to increase adoption. That's something that has never really landed well with me. I know Marky has been posting on X lately about how OpenAI is pretty close to going bankrupt and WinRAR is actually making more of a profit than them. Working in education, I have seen a mass pull towards AI adoption. I'm probably one of the few people who wishes it would all slow down. I've been dragging my feet on any implementation because I think there are a lot of policy and ethics questions that need to be addressed. It seems like every professional conference my end users and supervisor go to is touting AI, though, so I kind of feel like I am just going to be along for the ride at some point.


I have no interest in using commercially available AI for anything. FOSS AI released into the wild, which I can train myself on my own datasets, might be useful, but I'm busy making other tools to produce goods and services, and don't envision using DeepSeek, for example, to draft management software for the tools I am making in the foreseeable future. Maybe when I get these tools nailed down, drafting management software to automate them will be the next step, but that's still beyond my current abilities.

Thanks!


I suspect we are going to see massive abuse of this technology. The Grok porn-o-bot is the tip of the iceberg. We'll be getting lots of phone calls that sound human as scammers use all sorts of personal data against us. I've already seen people in my family get scammed by current methods. The genie is out of the bottle.

Of course it has real uses too, but you do have to be aware of the limitations.
