AI – Back To The Virtual Future

Some of us could see this coming from a long way off. Thirty years ago we warned there might be a few black holes in cyberspace (what we called the Internet in the 1990s). Had we foreseen that a company of Google's size and reach would emerge from this virtual world, we would have assumed it, rather than AI software, was the existential threat.

Twenty-eight years ago technologists and philosophers from around the world arrived at Warwick University to attend what, had this not been the dawn of the Internet age, might have been a relatively insignificant conference. Virtual Futures 95 was the antithesis of the annual cheerleading computer conferences. The weather during the four days was hot and humid, with grey cloud clamped down on the campus like the lid of a pressure cooker, and it constantly felt as if a storm was about to break. While I sat at a picnic table outside the science block, because technical issues had brought the session on video graphics to a premature end, raindrops spattered the paper I was reading: on cyber fascism, I seem to recall.

There was optimism and pessimism in equal measure. Mackenzie Wark suggested the commercialisation of the Information Super Highway had run out of road, while Arthur and Marilouise Kroker believed governments, corporations and the media would not give up on a quest to impose on the virtual world the same structures they used to control the physical one. This was a sentiment echoed by Manuel de Landa, who was followed around the campus by his group of disciples: aspiring philosophers on a weekend break from their day jobs in corporate IT.

There was near elation when it was announced that a session on the commercialisation of online communication had been cancelled because the speaker, John Browning, had quit his job as editor of Wired UK. Surely now the magazine would return to its roots as the cyberpunk’s bible. (The publication staggered on for a few more issues then collapsed a month before my two-page interview was due to appear.)

Perhaps there were amongst the delegates some of today’s ‘godfathers’, because Artificial Intelligence was discussed and questions asked about the role of humans in the digital age. There were warnings of temporal disturbance and of mankind reduced to little more than surplus flesh. On Sunday the storm broke and I drove back to Cambridge in torrential rain, feeling vindicated because, just maybe, abandoning a successful career and trashing a prosperous IT company had not been in vain. The money was gone but my conscience remained intact.

Five years earlier, using the nom de plume Peter Jarman, I had authored a novel entitled ‘Three Journeys into The Labyrinth’ – three short stories based on the same series of events. Not something the CEO and founder of a successful IT company puts into the public domain and, in retrospect, there were probably better ways of describing forebodings about the use of AI and the impact of social media. The book contained echoes of Dante’s Inferno, a descent into a hell constructed of conspiracy theories, and references to a form of madness induced by communicating with people in the darkness of cyberspace. Questions too about what happens if our perception of reality becomes reliant on information devoid of a coherent narrative. But it was a second novel, ‘Fahrenbrink,’ and a series of magazine articles, which proved the career killer.

My company, Digithurst, had developed a browser which stripped Teletext pages out of a TV signal and presented them on a PC screen as an online newspaper: its release prompted a ‘cease and desist’ letter from Teletext’s lawyers. The software used AI to match the ‘e-newspaper’s’ content to a reader’s interests and browsing history, and I used the contents of ‘Three Journeys into The Labyrinth’ to test this feature. The software established links between events described within the three stories of which I had been unaware when I wrote them. This proved something of a personal revelation and produced sufficient material for a second novel. Here was a form of computer-aided psychoanalysis made possible by the objectivity of artificial intelligence.
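The kind of interest matching described here can be loosely illustrated, in modern terms, as cosine similarity over word counts. This is a hypothetical sketch under that assumption – the function names are invented for illustration, and it is not the algorithm Digithurst’s browser actually used:

```python
# Loose sketch: rank articles by similarity to a reader's interests,
# using cosine similarity over simple word counts. Illustrative only.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_articles(profile_text: str, articles: list[str]) -> list[str]:
    """Order articles by similarity to words from the reader's history."""
    profile = Counter(profile_text.lower().split())
    return sorted(articles,
                  key=lambda t: cosine(profile, Counter(t.lower().split())),
                  reverse=True)

history = "labyrinth cyberspace novel fiction"
articles = ["stock market report", "a novel set in cyberspace"]
print(rank_articles(history, articles)[0])  # "a novel set in cyberspace"
```

The same machinery, pointed at a body of fiction rather than a news feed, surfaces unexpected connections between texts – which is all the ‘computer-aided psychoanalysis’ described above requires.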

The company already had a list of firsts, including hardware and software to display live video in Microsoft Windows, the electronic newspaper and a prototype social network using video communication, messaging and language translation. Now added to these was the AI-assisted authoring of a novel. While this was ground-breaking in 1990, today anyone journeying into the labyrinth of social media is doing much the same: except their AI-authored ‘Fahrenbrink,’ and any insights it contains, remains hidden on a server. This lack of transparency is central to the current debate over the use of generative AI, and to Microsoft’s attempt to disrupt Google’s business model by changing the status quo. Everything else – the scare stories about computers acquiring god-like powers, software deciding to destroy mankind and all those calls for regulation – is a means to an end for two companies attempting to reposition AI to cause the maximum damage to their rival.

My introduction to AI, back when it was referred to as ‘machine intelligence,’ came via a robot called ‘Shakey’ developed by the late Nils Nilsson, author of ‘The Principles of Artificial Intelligence.’ After founding Digithurst, and using techniques similar to those employed by Nilsson together with algorithms developed by researchers in Switzerland and East Germany, I set about developing software which would enable a desktop robot to identify and pick up various objects. The only true innovation on my part was software which made images compact enough to be stored and manipulated on a home computer (the company was self-financed and the equipment budget was minimal).

AI technology moved on and my company was compelled to follow. The shift became apparent when Margaret Boden published the second edition of her ‘Artificial Intelligence and Natural Man.’ This included references to papers by scientists who now regret their role in the development of AI. This new generation of researchers was developing software to analyse two-dimensional, as opposed to three-dimensional, objects and to search for patterns within numbers and text. One of these researchers, Geoffrey Hinton, co-authored a paper on the reverse mode of automatic differentiation, a technique which enabled a computer to simulate the human learning process. While Hinton acknowledges the heavy lifting was done by a colleague, given the number of people working in this field it is no surprise some feel credit should also go to the machine learning pioneer Paul Werbos, who came up with the original idea in 1974. Attempting to identify the person responsible for the science underpinning generative AI leads us all the way back to Bertrand Russell and possibly even Gottlob Frege. So ‘godfather’ might be a bit of a stretch – perhaps Sonny or Fredo would be more apt. Critical, however, is understanding how relatively innocent work on intelligent machines, which had a minimal impact on our everyday lives, evolved, largely unnoticed, into something potentially far more disruptive.

This Was All News To Me

In the summer of 1994 I was sat in the boardroom of Pearson Plc, on the top floor of Millbank Tower, conveying doubts regarding AI similar to those expressed in ‘Three Journeys into The Labyrinth’ to the then owners of the Financial Times and Westminster Press (a local newspaper publishing group). It was in conjunction with one of Westminster Press’s titles, The Wiltshire Gazette and Herald, that Digithurst demonstrated its online newspaper.

At this time the mantra within the newspaper industry was still ‘content is king,’ so it was difficult to explain to publishers that losing control of the medium would result in losing control of the message.

The prototype social network was inspired by a newspaper editor who believed the reader’s letters page was one part of a publication which might work well online. But even this preview of a world in which readers generated their own content failed to convince Pearson. Instead of spending the next two decades transforming their local newspapers into European versions of Facebook for the industry’s coveted younger readers, Westminster Press, like other local newspaper groups, became a cash cow for a succession of private investors. The new owners turned off the engines to conserve fuel and put the company into a shallow dive: what someone described as a highly lucrative twenty years of decline. In part the media’s current obsession with generative AI can be attributed to journalists fearing this descent is almost over. There is something therapeutic about sharing concerns about the future with a wider audience.

Journalists employed by today’s local newspaper groups, companies such as Reach and Newsquest, operate in a virtual world shaped by Google, Facebook and Twitter. These high-tech companies influence every step in the publishing process, from the original tip and research to crafting content and headlines to ensure the story is prominent in the reader’s social media feed and picked up by aggregators such as Google News. Declarations by editors regarding the future use of generative AI ring hollow given the industry’s existing reliance on Google as a journalistic tool and its principal connection with readers.

The first live video content displayed in Microsoft Windows was an excerpt from Fawlty Towers: Basil repeatedly hitting Manuel, the blows punctuated with, ‘Do you understand?’ In retrospect a boot forever stamping on someone’s face might have been more appropriate. At the presentation of the technology in Reuters’ New York offices, applications beyond displaying CNN on trading terminals were discussed: there were scowls and looks of disapproval when a marketing manager suggested this would be a ‘great way to watch pornography in the office.’ This was a decade before YouTube, although one other person had spotted a golden opportunity. Robert Maxwell demanded to know who had developed the technology. Fortunately Reuters refused to tell him: sometimes you get lucky and dodge the bullet.

At this time Bill Gates believed the market for Microsoft Windows would increase exponentially if its operating system was used as the platform for a new generation of televisions. As the broadcast, newspaper and computer industries converged on each other, few people considered what would happen when all three met. In the event, juxtaposing video and text on a single device eroded the linear narrative of the book and the newspaper article, replacing it with multiple streams of unstructured data. As compelling and liberating as on-demand information was on first viewing, very soon it created the impression of a world running out of control. Then two graduates at Stanford University came to the rescue of all those unable to cope with rampant acceleration. Don’t worry, we have got this under control, we were assured, when these two young men set up their Internet search company. Two and a half decades on, Google now has almost everything under its control, including us.

A final, somewhat prophetic essay entitled ‘Virtuelle Realität oder virtuelle Aussterben‘ (‘Virtual Reality or Virtual Extinction’) – which, to the dismay of Digithurst’s distributors, also appeared in magazines throughout Europe – expressed fears which, apparently, you should not make public until you retire. But then my commitment to the IT industry was far from total.

Even when Digithurst was turning over £1 million a year it was difficult to regard writing software as ‘work’, and I arrived at the office each morning carrying a bag of builder’s tools. Returning from that meeting with Reuters in New York, I changed into overalls to supervise the construction of what would become the last and most ambitious building: one that briefly saw two careers merge. The design of the company offices was inspired by two hotels, one modern, the other over two hundred years old, derelict, abandoned and located in a forest south of Hannover in Germany.

Travelling a great deal, and having an overworked PA, resulted in block bookings with hotel chains. A visit to Sophia Antipolis in the south of France was followed a day later by a meeting in Brussels, and I woke up in a hotel room close to Brussels airport. During the fifteen minutes it took to work out why the minibar had moved from next to the door to under the window, I occupied a virtual space called Novotel.

Plans to acquire the hotel near Hannover were frustrated by the son of the owner, who had his own plans for the building. He, like myself, felt a vaulted restaurant with a mezzanine made the building unique. It was a feature I copied when I designed Digithurst’s new building, with that trip into the virtual world of Novotel in mind. The three stories in ‘Three Journeys into The Labyrinth’ covered the period during which the office was built. When the building was complete, a terminal in the reception area displayed images of the hotel’s restaurant and the offices morphing into each other. The one departure from the original was a labyrinth glazed onto the tiled floor of the reception.

This was the point when two careers intersected, the building becoming part of an experiment in virtual reality. And there was a feeling of being in a virtual space when standing in either of the buildings and imagining I was in the other. Was this the same as the virtual space participants in our social media experiment occupied when communicating with each other? Either I was wrong, or everyone was missing the point. Well, not everyone, because after a conference on the future of television in the digital age – during which I was rounded on by Michael Grade and accused of being a luddite – someone assured me I had expressed widely held concerns and wondered why more people were not openly doing so. The answer, of course, was that most people would like to keep their jobs: something that was never uppermost in my mind. Even so, turning the page and discovering a full stop when expecting to find a comma came as a surprise.

So there would be no ‘If I had known, I would have been a locksmith (or bricklayer)’ moment. And, anyway, I was never a great believer in the corporate moral imperative. Companies are beholden to shareholders, and anyone with a basic grasp of cognition should realise a promise not to do evil is made by someone subconsciously weighing up the advantage of doing the opposite – something that becomes apparent when their company faces an existential threat. Would it have been better not to have parsed that book with AI software? Perhaps. Should our AI-generated profiles remain known only to the software that creates them and under the guardianship of Big Tech? Personally, I think the Cambridge Analytica debacle answered that. Did I regret Digithurst closing (the German division still exists, doing far more responsible things with imaging technology)? Perhaps, but in the words of the Pink Floyd song, ‘When the band you’re in starts playing different tunes …’

The original browser, as advanced as it was at the time, ended its days distributing advertisements to lottery machines, and there is a reminder of a past life when I collect my newspaper from the village shop. The building is missed; however, the original in Hannover has been renovated and can be visited if a location is needed for a story.

From Here On

The current controversy regarding generative AI demonstrates that issues raised during the 1990s remain unresolved. Understanding the way forward is made harder by the myths and hype emanating from the high-tech industry itself.

It does seem strange how few stories you read about cryptocurrencies these days. Which leads one to believe generative AI is the new bitcoin and, as well as inflating a rapidly expanding financial bubble, the technology is breathing life into a collection of zombie startups crippled by the recent rise in interest rates and the collapse of Silicon Valley Bank, along with other venture capital funding laundromats. I guess if your digital respirator failed to gain traction during the Covid pandemic you can now rebrand it as an AI device and hope investors do not question your access to the data on which the AI is reliant.

Claims that AI will destroy 300 million jobs tend to be made by investment banks whose customers get excited when they hear someone has discovered another way to replace labour with capital.

Would Google have released Bard if OpenAI had not made ChatGPT available to its users?

We have been here before. Plato was concerned that writing threatened the future development of human intelligence (‘Knowing What We Know’ by Simon Winchester is worth reading, as is ‘Plato at the Googleplex’ by Rebecca Goldstein). A more relevant point in the history of communication was the sixteenth century and the disputes over the translation of the Bible from Latin into local languages, which masked a broader struggle between Catholics and Protestants. (For Catholics read Google, for Protestants read Microsoft.)

The idea that the two forms of intelligence, human and artificial, are locked in some Darwinian struggle for dominance is misguided, as their relationship is symbiotic rather than competitive. Nevertheless, a mindless embrace of AI is not without certain risks, because:

While some of us grew up with books and consumed information in discrete packages, two generations have now been exposed to online communication, principally some form of social media, during various stages of cognitive development. Mental health issues amongst the young, including a near-epidemic incidence of ADHD, can in part be attributed to a heavy reliance on social media as a communication tool and a primary source of information before the age of sixteen – the period when a child develops the capacity for abstract thought. We should worry less about some far-off, futuristic, AI-powered quantum computer which is conscious and give more thought to how software running on phones today alters a young person’s conscious state.

In the 1990s several articles I wrote after conducting the online newspaper and social media trials expressed concern regarding information overload resulting from accessing large amounts of data without the tools to process it. In some ways generative AI is a response to this problem and, had it not been developed by OpenAI or Google, at some point something similar would have appeared.

A potential use of generative AI is to process large and complex, but discrete, collections of data. Jimmy Wales, for example, has suggested the software could provide easier access to information contained within Wikipedia, the online encyclopaedia he founded.

Anthropomorphism – a word we are starting to hear a lot. A person shouts across a valley and, while fully realising the reply is merely an echo, has momentarily fixed in their mind the image of someone calling back to them. This cognitive flaw served us well when only two types of Homo sapiens roamed the earth: the quick and the dead. Nature favoured those who assumed anything appearing to communicate with them was likely to be another intelligent lifeform, had intent and possibly meant them harm. Today the ‘anything’ extends to desktop PCs and mobile phones, and our tendency to believe these devices possess human attributes is amplified by computer scientists using biological-sounding terms such as ‘neural networks’ and ‘genetic algorithms.’ We now also have software that ‘hallucinates’ – whatever happened to the good old-fashioned database error?

True, AI software may ‘mimic the way our brain works’, but this software runs on a microprocessor made from silicon, plastic and copper, not in a human organ consisting of fat, water, protein, carbohydrates and salts. The key word here is ‘mimic’, not ‘is.’ Unfortunately, scientists tend to view the world through a drinking straw, and while being highly focused and incentivised to create a machine with a human-like brain is admirable, it does, given they too are susceptible to bouts of anthropomorphism, lower the threshold for evidence suggesting they have succeeded. In a scientific community which now regards peer-reviewed research as second best to a five-minute interview on the BBC Today programme, this ‘evidence’ is often accepted without question.

It is surprising that facial recognition in general, and micro-gestures in particular, have not featured in the AI godfathers’ long, but redacted, list of threats. Micro-gestures are small and momentary movements of facial muscles. They are detected subliminally and are often the very brief rehearsal of a smile or frown, enabling a person to deduce how the recipient is likely to react. Their presence explains why we instinctively trust, or are attracted to, another person. We cannot suppress micro-gestures – although they are no longer generated by dementia sufferers – so if your facial recognition software can detect them you have the anthropomorphic equivalent of a silver bullet: a computer that passes the Turing test because it appears to know what you are thinking.

Detecting micro-gestures requires a high-resolution, high-speed camera with a resolution far in excess of those at Sainsbury’s self-checkout making sure the bulge in your trousers is not a cucumber. While in 1983 my company’s first digital video system produced only low-resolution monochrome images, four years later we could display full broadcast colour video on a PC. So we will see enhanced facial analysis on mobile devices long before Argos sells a quantum computer that bursts into tears if you cannot remember where you left its charger. And while the miniaturisation of advanced imaging hardware is still some way off, the software is ready and waiting, and is rumoured to be in use, by states less democratic than those in the West, during the interrogation of political prisoners.

Never give a sucker an even break. Imagine you and a friend live at a time when reading and writing are the new big thing, and one day you meet someone who has heard of neither. You ask the man to whisper his most guarded secret, then write it down and pass it to your friend, who is stood out of earshot. Do you explain the basics of reading and writing to the illiterate man, or let him believe you have access to mind-reading technology? Realising the hold you have over the less informed person, it is tempting to pass yourself off as something you are not.

We are surrounded by technology our stone-age brains have to deal with on a daily basis, and which we accept without understanding in any great depth. We manage to do this without resorting to the bizarre idea that it was all created by some ‘god like power.’ Mhairi Aitken of the Alan Turing Institute responded to an essay by Ian Hogarth published in the Financial Times by pointing out that ‘AI’s God like powers is a Big Tech narrative that needs calling out.’ Hogarth probably just shrugged this off, thinking ‘Oh well, a nice try’, but more pernicious is the fact that Aitken was right and the public, the media and governments are all being played.

Regulation will not work: Big Tech knows this and only suggests it because the alternative is less palatable. It is a narrative created by companies who realise they have become so powerful they must now be broken up: it is their use of artificial intelligence which has pushed them over the line. The existential threat to humanity is not Google’s generative AI, but Google itself.

The Google News service has always been problematic; however, the way it has been used to disseminate coverage of Bard in particular, and generative AI in general, has brought an obvious conflict of interest into sharp relief. Comments by present and former Google employees regarding generative AI created a misleading narrative which was then amplified by the company’s own news aggregation service.

Its use of generative AI will see Google, which already has a near monopoly over the way information is searched and accessed, begin manufacturing information, which in turn will add to the repository of information searched and accessed. This will create an information doom loop.

Google’s Internet search builds a profile of us as we search for information and if generative AI is creating this information there is now scope for manipulating how we interpret anything we find on the Internet. This level of control would go far beyond what was achieved by Cambridge Analytica.

Governments’ influence over Big Tech is limited by elected representatives’ near-total reliance on social media and internet search-based advertising as campaigning tools.

The regulatory framework Google envisages for the future use of generative AI would see the technology restricted to enhancing the company’s existing services, further reducing transparency and any external control over how AI algorithms are used.

While Google projects itself as apolitical and non-partisan, no government claiming to be democratic should allow an organisation to have near-total control of how a citizen perceives the social system of which they are a part. Which brings us back to that picnic table at Virtual Futures 95 and those raindrops falling on the paper I was reading. Yes, I remember now: the topic was cyber fascism, and at the time the predictions seemed somewhat hysterical. But if governments fail to break up Big Tech and arrest the creation of an infosphere beyond democratic control, the little power they currently retain over companies such as Google will slip from their grasp.

Peter Kruger
Author of ‘The Ghost in the Labyrinth’