Source: The Guardian
Zoom being Zoom, Tim Berners-Lee’s name appears in my browser window about 20 seconds before his audio and video feeds kick in – and for a brief moment, the prospect of talking online to the inventor of the world wide web seems so full of symbolism and significance that it threatens to take my breath away.
During the hour we spend talking, that thought never fully recedes – but the reality is inevitably rather more prosaic: a 65-year-old man in a slightly crumpled, light blue polo shirt, talking – usually at high speed – from his home a dozen or so miles from Oxford, at a desk positioned just next to a fancy-looking model house (“I think that’s a mansard roof,” he says). For all that he is one of a tiny group of people who can claim to have fundamentally changed how most of us live – which explains why he had a role in Danny Boyle’s opening ceremony for the 2012 Olympics – he carries himself with a striking lack of star power. He could probably walk down the average high street unrecognised; as if to underline that the human race may now have its priorities slightly wrong, at 345,000, his Twitter followers number less than 5% of Piers Morgan’s.
Just over 30 years have passed since Berners-Lee, who was then working at the Swiss HQ of Cern (AKA the European Organisation for Nuclear Research, and the largest particle physics laboratory in the world) came up with the initial idea for the world wide web, conceived at first as a way of easily sharing research documents. (While the terms are now used synonymously, the internet is the physical network of interconnected computers and dates back to the 60s, while the web of Berners-Lee’s invention is the system that enables information to be exchanged using the internet’s connections.)
His idea was labelled “vague but exciting” by one of his bosses, but once he had built the first browser and website, interest and involvement in it soon snowballed. There was, however, one fascinating and inevitable twist: unlike the kind of inventors who patent their creations and keep some measure of control over what happens to them, Berners-Lee introduced his work to the world, and then simply watched what millions of people did with it.
That said, via the World Wide Web Foundation, founded in 2008 to “advance the open web as a public good and a basic right”, he has long been a key voice in the global conversation about the online world and where it is taking us. Since 2017, the foundation has published an annual letter by Berners-Lee on 12 March, the web’s anniversary. This year’s deals with a huge issue highlighted and intensified by the pandemic: the fact that 2.2 billion young people across the world lack the stable internet access they need to learn online. From there, it pays tribute to a handful of young trailblazers who have recently used the web for such projects as turning plastic waste into personal protective equipment and working to protect “women, non-binary people and other at-risk individuals” in some south Asian countries. And it makes the case for technology that is “helpful, not harmful; inclusive, not exclusive”.
The latter point comes down to online abuse, bullying and harassment, and the way that these aspects of online life disproportionately affect young people. Among Berners-Lee’s arguments is the insistence that things could easily be safer and more humane if the people who build big platforms thought more carefully about their users. Blocking functions and other such tools, allowing people to focus their engagement on trusted friends, are hardly complicated, he says, but they often seem to be. “You need all those controls to be simple and obvious enough to be used by young people. That’s one example. Another thing is, if you’re asking people if they’ve experienced any problems, you have to speak to them in their language. People don’t think, ‘That was online abuse.’ They think, ‘That hurt.’ You need to talk to them in their language, and make it clear that they have options.”
The letter pointedly refers to “the toxic internet”. Can he explain what that means to him?
“Well, you’re not a feminist blogger,” he says. “If you were, you wouldn’t ask me that. If you’re a woman, you can have a fine time until the one moment you get picked on. And it can be psychologically very, very extreme. The toxic internet is something that young people have less ability to manage. Women will be more susceptible. LGBTQ people, typically, are much more likely to be picked as a target. The toxic internet isn’t something I experience at all. You may not.”
By way of pointing towards something better, he talks about the possibility of “social networks where bad things happen less”. He mentions Twitter’s Birdwatch project, which encourages its users to flag up misinformation, and his own idea of social media platforms offering their users “stretch-friends” – people from way beyond their usual circles, who might enhance their understanding of the wider world. “Suppose we build community curation systems, just like Wikipedia is a community curation system for an encyclopedia,” he says. “Can we build real-time curation systems for what we think is appropriate? Can we build systems that lead people to being more constructive, and more likely to understand what it’s like to be on the other side of a cultural divide, and more likely to figure out what these other people who speak different languages are like?”
It might seem as if nastiness, loathing and what the US academic Shoshana Zuboff calls “surveillance capitalism” are wired into the modern web as a matter of design, but Berners-Lee argues that reshaping the online realm along more positive, human-centred lines might be simpler than it seems. To illustrate his point, he talks about the widely acclaimed Netflix docudrama The Social Dilemma, which traces the dire effects on a family – in particular, a young brother and sister – of an addictive internet strewn with lies and conspiracy theories, and platforms seemingly configured to maximise the profile of those things.
“It makes the case against one particular way in which things go wrong,” he says. “It talks about a family who use social media, and they use advertising, and they are manipulated by the AI in the social network to maximise engagement, which maximises engagement with nasty stuff. So they end up believing a bunch of conspiracy theories, and the family becomes a very broken subset of humanity, just as a result of interacting with the social network. It’s powerful.
“But then you should step back. It’s making a good case about how one particular wave of social networks can work, if you train the AI to maximise the engagement of the teenager. But if you train the AI to maximise the happiness of the teenager, or the efficiency of the teenager – well, then the whole thing would produce a very different outcome. You use the same software – you just turn the dial. You could imagine two social networks where most of the code is mostly the same – it’s just that one is optimised for one thing, and the other is optimised for another. And the unintended consequences in each case are completely different.”
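His “turn the dial” point can be made concrete in code. The sketch below is a toy illustration built on invented assumptions, not the software of any real platform: the posts, the predicted scores and the objective functions are all hypothetical. What it shows is that the ranking machinery can stay identical while the objective it is handed flips the outcome.

```typescript
// A toy feed ranker. Everything here is hypothetical: real platforms use
// trained models, not hand-written scores. The point is structural: the
// ranking code is the same, only the objective ("the dial") changes.

type Post = {
  id: string;
  predictedEngagement: number; // how likely the user is to click or react
  predictedWellbeing: number;  // how likely the post is to leave them better off
};

type Objective = (post: Post) => number;

const maximiseEngagement: Objective = post => post.predictedEngagement;
const maximiseWellbeing: Objective = post => post.predictedWellbeing;

// Same software either way: sort candidates by whatever the dial is set to.
function rankFeed(candidates: Post[], objective: Objective): Post[] {
  return [...candidates].sort((a, b) => objective(b) - objective(a));
}

const candidates: Post[] = [
  { id: "outrage-thread", predictedEngagement: 0.9, predictedWellbeing: 0.1 },
  { id: "friend-update", predictedEngagement: 0.4, predictedWellbeing: 0.8 },
];

console.log(rankFeed(candidates, maximiseEngagement).map(p => p.id));
// -> ["outrage-thread", "friend-update"]
console.log(rankFeed(candidates, maximiseWellbeing).map(p => p.id));
// -> ["friend-update", "outrage-thread"]
```

Two networks running “mostly the same” code, as he puts it, diverge completely in what they amplify.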
Berners-Lee’s story goes back to the first stirrings of the tech age. His parents, Conway Berners-Lee and Mary Lee Woods, were part of the team that produced the Ferranti Mark 1, the world’s first commercially available digital computer. He was a childhood trainspotter and, after graduating from Oxford (in physics), his working life began with a job at the now defunct British electronics firm Plessey (whose recruitment ads sometimes featured the enticing slogan “good job, good company”). After a first stint at Cern that ended in 1980, he spent three years at Image Computer Systems, based in Bournemouth. If all of this sounds almost comically mundane, there are things in his cuttings file that perhaps betray a romantic, future-gazing kind of mindset: when his two children were growing up, he would apparently make a habit of telling them: “Everything you don’t understand is magic.”
He is now a professorial fellow of computer science at Oxford University, but spends most of his working life at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, where he has been working for four years on a fascinating project called Solid (more of which in a moment). Like most of the internet’s pioneers, he began his journey through the past three decades with hopes that verged on the utopian and endured for a long time. In 2010, he may have warned against big social networks threatening the web’s “single, universal information space”, but in 2014, he was still enthusiastically making the case for how its global connections could “transform presidential elections, overturn authoritarian regimes … and enrich our social networks”. He says it was not until the watershed year of 2016 that he fully realised how drastically things had changed.
For the first phase of the web’s progress, he had a set of stock lines about how people’s use of it would tend to keep everything on the moral straight and narrow. “People complained about junk being on there,” he says, cracking a smile at the quaintness of the memory. “They said, ‘Well, you made this thing, and I found some nonsense on it.’ And for many years, I would say, ‘The web is a neutral platform. Humanity uses it, and humanity is good and bad, across the spectrum. What you have to do is just to nurture your bookmark list: take off the things that haven’t served you well, and nurture all the things that have served you well, and blog about them, and link to them.’”
For a while, he says, he thought this worked. “People read each other’s blogs, and they linked to blogs they thought were brilliant, and the blogosphere became this incredibly rich medium. In those days, I and all the people I knew curated a web of good stuff. And there was good stuff: there were blogs, and then Wikipedia came along, and there were search engines that found stuff out really quickly.”
By 2016, he says, “people were still curating their bookmarks” – or, rather, consuming and propagating material they found on Facebook and Twitter – and among his liberal-inclined friends and associates, he says this seemed to be as positive and truth-promoting as ever.
“Meanwhile, there were far-right people, who were also quite happy with the web, bookmarking articles pointing to complete garbage, and conspiracy theories, and things that were completely untrue. So I realised that I and the people I knew were in a huge bubble. In the States, they were in a blue bubble, and then there was a red bubble … but it was no skin off our nose that there was this other red bubble. Apart from one thing: the people in the other bubble had the vote. And they believed all kinds of nonsense things, like the idea that they would get £350m a week [for the NHS] if we pulled the Brexit plug. After Brexit and Trump, I think a lot of people realised: ‘We need to have a web that spreads more truth than rubbish.’ And at that point, the Web Foundation said: ‘It’s not just about getting the web to everyone, it has to serve humanity in a positive way.’”
This basic argument has now belatedly started to make its way into politics, something seen both in an increasingly loud conversation about the big platforms’ responsibility for misinformation and hate speech, and in the accompanying conversation about tackling the same platforms’ huge concentrations of power. On this stuff, Berners-Lee’s opinions are delicately balanced. To take two topical examples, he is opposed to Australia’s plans to force tech giants to pay news organisations even for the use of links to their articles (“the right to link is really important – it’s just part of free speech, and it makes the web functional”), though when I ask him about the possibility of Google and Facebook being forcibly broken up, he sounds at least open to the idea.
Dominance of the web by a tiny handful of companies, he reminds me, is hardly new, but over the years, things have always shifted. Look at the history of browsers: Netscape was succeeded by Internet Explorer, which in turn was nudged aside by Google Chrome and Apple’s Safari.
To that, there is an obvious riposte. The position of the 21st century’s big players looks very different.
“I think what the American public and lawmakers are doing … I think they’re aware of that,” he replies. “And they know from experience with big oil and with [the American telecoms utility] AT&T, that there have been times when US governments have broken up large companies. There’s a lot of discussion of that right now, and so that is a possibility.”
Is it a possibility he supports? Would he like Google and Facebook to be broken up?
“I think it’s more complicated than that. I think it’s reasonable for the Federal Trade Commission to be strengthened, and be empowered to look at it, and to do the analysis. What you could do with these sorts of things is …” A very long pause, then he resumes. “You also need to talk to legal experts. But the US government does have the tools necessary.”
Since 2016, Berners-Lee and some of his colleagues at MIT have been working on another way of challenging big tech’s power: starting to reconfigure the web so that its users own and control their data, which might in turn power a more diverse and decentralised internet. This is the basis of the Solid project, which has now spawned a startup called Inrupt. It all works on the basis of users controlling their data in online storage spaces called Personal Online Data Stores, or Pods. (“That’s in the cloud – or, if you’re really geeky, you have it at home, sitting in a physical box.”) An increasing range of newly built apps is being designed to work with this new model: instead of surrendering their data to be indiscriminately used by big platforms, Solid users will judiciously allow everything from social media sites to shopping services to access their personal information on a case-by-case basis.
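To give a flavour of what that model looks like to a developer, here is a minimal sketch that assumes Inrupt’s open-source JavaScript client library, @inrupt/solid-client. The Pod address is a made-up placeholder, and a real app would first obtain the user’s permission through Solid’s login flow, so treat this as illustrative rather than authoritative.

```typescript
// A minimal sketch of a Solid app reading a user's name from their Pod,
// assuming Inrupt's @inrupt/solid-client library. The WebID below is a
// hypothetical placeholder, not a real Pod.
import {
  getSolidDataset,
  getThing,
  getStringNoLocale,
} from "@inrupt/solid-client";
import { FOAF } from "@inrupt/vocab-common-rdf";

async function readProfileName(webId: string): Promise<string | null> {
  // A WebID is a URL pointing at a profile document stored in the Pod:
  // the app fetches from an address the user controls, not from a
  // platform's own database.
  const dataset = await getSolidDataset(webId);
  const profile = getThing(dataset, webId);
  return profile ? getStringNoLocale(profile, FOAF.name) : null;
}

readProfileName("https://alice.example-pod.net/profile/card#me")
  .then(name => console.log(name ?? "Nothing this app is allowed to read"));
```

The inversion Berners-Lee describes is visible even in this toy: the data lives at the user’s address, and the app is a guest.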
Even if he is uncomfortable with the phrase surveillance capitalism (“The world is more complicated than just sticking a term on things,” he says), what he is working on will undermine that business model, if it is successful, won’t it? “Yeah. It turns the world the right way round. The idea of surveillance capitalism depends on your data going, by default, to somebody else, and Solid’s default is that it goes to you.”
Though Inrupt is now working on the Solid “privacy platform” with such organisations as the BBC, the NHS and NatWest bank, its first stirrings seem terribly quiet: the day before we talk, I sign up to an embryonic, Solid-based social media platform called Ibex, but 36 hours later, my one post (“Hello all – Monday in the UK, and everything is coming apart … Is anyone there?”) remains unanswered. This, says Berners-Lee, is because more work has so far gone into the project’s technological nuts and bolts than its apps. “I want to apply all the Steve Jobs criteria – it should be simple, and it should be sweet: ‘I want one of those.’”
Does he have a time frame for that?
“I think it’ll probably take a year or two. But if you’re a developer, then this is the time to join in.”
Not for the first time, despite his worries and misgivings about where his invention has ended up, he sounds convincingly optimistic. And, I wonder: amid the voices telling us that the web has turned out to be a dystopian hell, does that feel like a lonely vocation?
“No. My day job is at Inrupt, working on Solid. We’re very, very positive. We’re building systems which are demonstrably fairer, demonstrably more constructive, which demonstrably give power to the user.”
Just for a moment, he slows down. “I’m lucky. It’s a very positive environment to be in. The people I’m working with have bright lights in their eyes.”