
Google's Anil Sabharwal says it's time to talk about AI and the changes it will bring

Paul Smith, Technology editor


Google's chief Australian engineer and global head of its photos and communications tools has warned that government and business must confront tough questions now about the potential negative impacts of artificial intelligence on jobs and society, and work with academia to ensure economies and humanity benefit as the technology gets smarter.

Canadian-born Australian citizen Anil Sabharwal relocated to Sydney from Google's Mountain View headquarters earlier this year to head up its 700-strong team of engineers. He remains one of the most influential executives within Google globally, as the head of its photo-organising app Google Photos and its communications tools.

Sabharwal believes AI will prove a major positive for humanity, and headlined a Google event in May where the company sought to highlight the positive impact AI was already having in areas such as conservation, art and healthcare.

Anil Sabharwal says there are huge potential benefits from the rise of AI, but that a serious public conversation about how things will change needs to start.  Ben Rushton

However, in a wide-ranging interview with The Australian Financial Review, he conceded there are many crucial ethical, societal and economic questions that still need to be resolved before the technology's enormous potential is developed into real-life products and services.

Governments around the world are grappling with the impending challenge of automation-led jobs disruption across multiple industry sectors, and increasingly capable systems will soon pose science-fiction-style ethical questions about how humans should interact with AI.


Tough conversations

"We can't kick the can down the road and say 'I guess we'll see what happens in 15 years', because the decisions we make today are going to create the environment in which AI applications are built and how they are used," Sabharwal says.

"For most of the world, AI is a fuzzy concept. It's an idea they get exposed to from film or books, but those of us that live it day to day understand that we are the ones that are teaching the computers, and are providing the controls and the mechanisms for the way they operate.

"We need to spend time as a society, as a world – because this is not a technology problem, this is a policy question. It is about ethics, transparency, user trust, and it is also about controls."

In its submission to a Senate inquiry into the future of work earlier this year, Google claimed the Australian economy would reap economic gains of up to $1.2 trillion by 2030 from automation.

Google CEO Sundar Pichai published a blog post in June laying out the principles the company will follow in further developing AI capability. Bloomberg


The projection came from a report it commissioned from economics consultancy AlphaBeta, which said the economy could receive a $600 billion boost in the same period if the estimated 6.2 million Australians due to enter the workforce were equipped with the right skills.

However, there are large groups of workers that stand to be displaced by the inexorable rise of AI technology.

Disrupted workers

The hundreds of thousands of Australians who earn their living as various kinds of drivers, for example, will watch developments in driverless technology – including in Google's Waymo division – with fear rather than wonder.

"It really is about up-skilling and the only way this is going to work is if there is an investment across government and across industry to be able to make our workforce that much more productive," Sabharwal says.

Sabharwal says governments, tech companies and citizens need to agree upon ethics, transparency, user trust and controls with AI. Ben Rushton


"The conversation with a taxi driver needs to be honest, but positive about the fact that the problem of transporting people is being solved in a way that will save lives. However, we will always need people to provide capabilities in things related to transport that only humans can do.

"It is our responsibility as industry and government, to be able to say to this taxi driver, 'We've got you, and here's how we're going to help you find a job that you love just as much, that you are just as good at and that pays you income-wise just as much, if not more.'"

Asked if it is the responsibility of governments to find jobs for those displaced, Sabharwal concedes that he does not have the answers to workforce displacement.

But he says that this broader uncertainty is why it is essential for politicians to engage with industry, researchers and education providers immediately to ensure that citizens are still able to fulfil valuable roles and contribute to society in meaningful ways if their jobs disappear.

Australian academics have used Google's open-source machine learning platform TensorFlow to build a detector that can find dugongs in thousands of ocean photos. 

Making us less human?


He describes AI as a multiplier of human ingenuity, meaning that we should be working towards a future where technology enables humans to do more, rather than replacing tasks and creating a society full of underemployed or lazy citizens.

In professional terms, this means automation freeing up staff to focus on higher-value work; in personal terms, it means technology enabling people to do more than they could before, rather than simply doing the same tasks with less time and effort.

Google has numerous product lines, which arguably take human thought and traditional learning out of the equation. Google Translate could very feasibly evolve to Babel-fish-like levels, where natural conversations can occur between people of different nationalities with no need to ever learn a foreign tongue.

In May Google introduced a Smart Compose auto-complete feature for Gmail, which means users can select prewritten responses to emails, rather than thinking and writing them for themselves.

Joaquin Phoenix in the 2013 movie "Her", in which his character falls in love with a virtual assistant that is intelligent enough to converse with him. 

Sabharwal says he is optimistic that people will use such AI-led advances to achieve more with their time, rather than becoming stupid and reliant on machines.


"Once we have to stop doing the things, like learning French to be able to have a conversation with someone, we are going to find new things to do, because humans are not built in a way that we just sit here and eat potato chips because something else is doing it for us," he says.

"As computers have advanced and we have gone from using the abacus to having amazing computational power, we haven't limited ourselves to basic mathematics. With AI it is going to be the power of the human and the computer working together.

"AI is now augmenting brain power, to allow us to do that much more than we could do before. So I don't have any fear that we are going to go in the other direction and stop thinking and stop being people."

AI in action

After Google became embroiled in scandal related to the use of its AI technology to analyse drone footage for the US Department of Defence, chief executive Sundar Pichai published a blog post in June laying out the principles it would follow in further developing AI capability.

It said AI applications must be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.


Additionally, he said Google would not pursue any AI technologies that could cause "overall harm"; create weapons or other technology designed to facilitate injury; be used to gather or use information for surveillance violating internationally accepted norms; or contravene widely accepted principles of international law and human rights.

At its recent AI event in Sydney, Google highlighted various examples of positive use of its AI technology in Australia, including work by academics from Murdoch University and Queensland University of Technology, who have been using drones to photograph the ocean to try to keep track of endangered dugongs.

By using Google's free open-source machine-learning platform TensorFlow, they have been able to build a detector that could learn to automatically find dugongs in the thousands of photos, without needing as many human hours spent poring over them.
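The kind of detector the researchers describe is, at its core, a binary image classifier. A minimal sketch of how one might be built with TensorFlow's Keras API is below; the network architecture, image size and directory layout are illustrative assumptions, not details of the actual Murdoch/QUT project.

```python
# Hypothetical sketch: a small convolutional network that labels an
# ocean-photo tile as "dugong" or "no dugong". Architecture, tile size
# and file paths are assumptions for illustration only.
import tensorflow as tf

IMG_SIZE = (224, 224)

def build_detector() -> tf.keras.Model:
    """Return a small CNN that outputs P(dugong) for one image tile."""
    return tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of a dugong
    ])

model = build_detector()
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# In practice the model would then be trained on labelled drone tiles, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "tiles/", image_size=IMG_SIZE, label_mode="binary")
# model.fit(train_ds, epochs=10)
```

Once trained, such a model can score every tile of a survey photo automatically, so humans only review the tiles flagged as likely sightings — which is where the saving in human hours comes from.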

It also showed work conducted by a Brisbane-based medical data specialist, which has used Google's Cloud AI to "train" a system to analyse hundreds of thousands of prostate cancer images. As a result, scan analysis is delivered to clinicians in 10 to 15 minutes, rather than the two to seven days required by traditional diagnostic methods.

Australian opportunities

Despite AI and machine learning being one of the most commonly quoted strategies in corporate technology plans in the past year, Sabharwal says adoption in Australian businesses remains nascent.


He says Australia has a great opportunity to specialise and lead in some areas of AI and nominates healthcare as a great example.

"With all the challenges that we face with skin cancer in Australia it offers an opportunity, because the difference between a melanoma diagnosis being cancerous and having massive implications to it being benign is minuscule in terms of the observation of images," he says.

"We will always want humans looking at this with their expertise, but we can look at augmenting that and providing assistance and support to those individuals, so they can have additional data to make their decisions faster."

More personalisation

Aside from the broader opportunities, Sabharwal views AI as an ever more important tool in his kit to improve the Google services under his control, such as Photos.

Users of Google Photos are experiencing AI, often without realising it, as the app recognises people in pictures and offers one-click options to share the image with them, and also provides suggestions on how to edit pictures while they are being viewed.


Sabharwal believes AI will soon offer much greater scope for services to be personalised. He compares a human personal assistant with today's digital assistants, and says the main difference is personalisation: a human assistant will spend their first day asking how their boss likes to do things, and will tailor their work accordingly.

He says AI will soon enable Google to personally tailor apps, so that one person's Google Photos app will look completely different from another's.

"Today, to a large extent, we are still building products and experiences that are broad strokes. And there's some personalisation that's happening off the basis of content," he says.

"Trying to have us all have the exact same product and experience is flawed, and I would say within a five-year horizon our apps will be evolving and adapting as we use them and teach them about what is important to us."

Artificial relationships

The increasing personalisation of technology through AI is another area that has proven fertile for science-fiction movie makers. The 2013 movie Her, for example, follows the story of a man who falls in love with a virtual assistant that is intelligent enough to converse with him.


While there are clear benefits from the development of increasingly smart chatbots in customer service and in the potential for companionship for the socially isolated, Sabharwal says it is important that it is not just left to those working in technology companies to decide how things should develop.

He says politicians and broader society need to discuss questions of ethics and possibilities, to try to figure out the right applications of AI and what parameters should be put in place.

"There are always scenarios where, if you take something to the extreme, you can devise a way in which it doesn't end well," he says.

"It makes for a great Hollywood film, but it's up to us to make sure that we actually put the right principles in place and that, as humankind, we make the determinations that we want to make.

"We will need the right resolutions and the right models in order to be able to achieve good outcomes, and put the right conditions in place to try to prevent bad actors."

Paul Smith edits the technology coverage and has been a leading writer on the sector for 20 years. He covers big tech, business use of tech, the fast-growing Australian tech industry and start-ups, telecommunications and national innovation policy. Connect with Paul on Twitter. Email Paul at psmith@afr.com

