Reuters' not-so-big gamble with AI: for a profession terrified by bots, just wait until hackers et al rig the AI, too. When you are always looking for easy answers, you never ask the right questions.

When journalists talk about themselves, it is always in superlatives. It is about saving democracy or, in this case, taking big gambles. There is a huge difference between taking a risk and taking a gamble, and it is fitting that Reuters is seen as taking the latter -- the one where you hope fate saves your hide instead of making changes so you can rely on yourself.

The article in question, "Reuters is taking a big gamble on AI-supported journalism: The news agency has announced the launch of Lynx Insight, a major new AI-powered tool that will be used in its newsrooms across the world," shows precisely why the industry not only collapsed, but will stay in its coffin:

Reuters is building an AI tool to help journalists analyse data, suggest story ideas, and even write some sentences, aiming not to replace reporters but instead augment them with a digital data scientist-cum-copywriting assistant.

Called Lynx Insight, it has been trialled by dozens of journalists since the summer, and will now be rolled out across Reuters newsrooms. Reg Chua, executive editor of editorial operations, data and innovation at Reuters, says the aim is to divvy up editorial work into what machines do best (such as chew through data and spot patterns), and what human editorial staff excel at (such as asking questions, judging importance, understanding context and — presumably — drinking excessive amounts of coffee).

This is typical: find a passive, magical solution to augment people who keep doing the same thing while expecting a different outcome.

AI will not save journalism. Your lawnmower will not save your personal fortunes. It is just a tool.

When I was an undergraduate studying psychology, I had the good fortune to take a class in AI, and the fun was running actual experiments to test various programs. I got to "train" computers, so to speak. The best part was that there was an entire lab with about ten computers, and being the keener, I was one of the few who went there -- I often had the whole lab to myself for the entire day.

I took full advantage of this. I was always one to run experiments that confirmed theories, but also to refute them, and I could get awfully creative. Since it took hours back then to train a computer, I could run ten experiments at once and compare and contrast the results.

I knew a lot about heuristics. I knew about logical fallacies. I knew about optical illusions in art. These were some of the points of interest I could actually test. It was promising: I had the lab all to myself, uninterrupted, and could take my time and finely tune what I was doing.

What I discovered was that AI was extremely easy to manipulate and fool. Not a single theory stood up to my tests.

AI was patriarchal by design. It was not as logical as the press releases from tech companies love to pretend; it was, in fact, mired in its own narrative and its own confines of logic.

And this made it ripe for some very elegant and stealthy manipulations.

It could not "see" multiple interpretations of a single object, for instance. It could not spot certain variables. There were countless other design flaws and limitations -- so much so, that I could make use of those flaws and create a program that would train AI software to be able to avoid spyware by developing a "phobia" to it.

Hit it once with a virus, and it could learn to "avoid" similar viruses in the future.

But it came at the same price people with phobias pay: it could get too touchy -- and then avoid even good things because of the training. I could "cool down" the phobia by injecting a secondary AI program, just as therapy or medication can for human phobics -- but then again, you can override one AI program with another for nefarious purposes.
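To make the trade-off concrete, here is a minimal, hypothetical sketch in Python of what that kind of over-generalized avoidance and "cooling down" can look like. It is only an illustration, with invented names and thresholds -- not the program I built back then, and certainly not anything Reuters is running.

```python
# Toy illustration only: invented names and thresholds, not a real
# anti-spyware system and not Reuters' Lynx Insight.

def similarity(a, b):
    """Fraction of shared behaviour features between two programs."""
    return len(a & b) / len(a | b)

class PhobicAgent:
    def __init__(self, threshold=0.3):
        self.bad_memories = []      # feature sets of things that "hurt" it
        self.threshold = threshold  # lower threshold = touchier agent

    def hit_with(self, malware_features):
        """One bad experience is enough to form the 'phobia'."""
        self.bad_memories.append(set(malware_features))

    def avoids(self, program_features):
        """Shun anything that merely resembles a bad memory."""
        program = set(program_features)
        return any(similarity(program, bad) >= self.threshold
                   for bad in self.bad_memories)

    def cool_down(self, known_good):
        """The 'secondary program': raise the threshold until known-good
        software is no longer shunned -- therapy for the phobia."""
        while self.threshold < 1.0 and any(self.avoids(g) for g in known_good):
            self.threshold += 0.05

agent = PhobicAgent()
agent.hit_with({"keylogging", "network_upload", "autostart"})   # hit once with a "virus"

spyware_variant = {"keylogging", "network_upload", "autostart", "hides_process"}
backup_tool     = {"network_upload", "autostart", "file_copy"}

print(agent.avoids(spyware_variant))   # True: one bad hit generalizes to close variants
print(agent.avoids(backup_tool))       # True: too touchy -- a harmless tool is shunned too

agent.cool_down([backup_tool])         # counter-train on a known-good program
print(agent.avoids(backup_tool))       # False: the phobia has been cooled down
print(agent.avoids(spyware_variant))   # True: still wary of the close variant -- but the
                                       # same mechanism could be abused to erase that wariness
```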

Nothing has actually changed in AI since then. I can still muck it up within minutes, even with the same simple methods I used to muck up the cruder versions of it.

And I am not an AI programmer.

So imagine a hacker who is. 

If bots on Twitter can do damage, and Facebook's problems have journalism all scared, then how reliable is AI in the newsroom? If you become dependent on computers to do your thinking for you, then when something happens, you are useless.

You can be fooled. You can become a vector. If the technology fails, you are hobbled in your job.

Journalists do not have the right sort of empirical training, and AI is not going to plug those holes, no matter what. It will not fix the problem. The problem is systemic: a profession with a broken mindset. AI is not going to alter those fundamental issues.

We see many tech companies over-rely on algorithms and AI, and the backlash against them started because their screw-ups can be traced to building passivity and dependence on technology into places where it does not belong.

I have written about the limitations of AI before, and this is not the only problem with it.

The more we rely on technology, the worse off journalism becomes.

The less we understand information. The less able people are to separate a lie from a truth. The more we assume information is accurate because it has been processed through technology.

Journalism training would require reintroducing journalists to primitivity: no electricity, no technology, not even basic comforts. Throw them in the middle of nowhere and make them get in tune with their environment and instincts.

Not build a fortress of soon-to-be-obsolete toys.