Google’s machine-learning technology, TensorFlow, is now open source, which means anyone is free to use it.

The development of smarter and more pervasive artificial intelligence is about to shift into overdrive with the announcement by Google this week that TensorFlow, its second-generation machine-learning system, will be made available free to anyone who wants to use it.

Machine learning emulates the way the human brain learns about the world, recognising patterns and relationships, understanding language and coping with ambiguity.

This technology already provides the smarts for Google’s image and speech recognition, foreign-language translation and various other applications. It is now open source: the source code is freely available and can be modified, developed in new directions and redistributed, in the same way that the Linux operating system is open.
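For readers curious about what that looks like in practice, here is a minimal sketch using TensorFlow’s Python interface in the graph-and-session style it shipped with (the 1.x-era API; the library’s interface has evolved since, so treat this as illustrative rather than definitive):

    import tensorflow as tf

    # TensorFlow describes a computation as a graph of operations on
    # multi-dimensional arrays, or "tensors".
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    product = tf.matmul(a, b)

    # A session executes the graph and returns the requested result.
    with tf.Session() as sess:
        print(sess.run(product))

The same building blocks, wired into much larger graphs, underpin the image-recognition and translation models mentioned above.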

There are gold-rush opportunities for imaginative commercial developers and scientists. Consider, for example, a multilingual virtual assistant that anticipates your needs by combining your daily activity patterns with improved natural speech, image and pattern recognition, so that it knows what you want, and when and how you want it. It might also use augmented reality to overlay a real-world environment with sound, video, images or GPS data.


TensorFlow can also be set to work searching through large data sets for something of value to you, whether for research, business intelligence or public safety.

By making TensorFlow open source, Google is playing the long game. It’s positioning itself at the centre of a growing machine-learning community instead of pursuing short-term profit by selling the software or keeping it to itself. In time, any number of serendipitous developments will emerge from such an open community.

But Google has its work cut out to convince existential-risk sceptics that it is still committed to its philosophy of doing business "without doing evil".

Intuitive applications that have an intimate place in your life will proliferate because people want them, and there is much research and development going into getting the underlying sense-making engine to work properly.

Some people are going to be very worried, while others will be delighted.

Microsoft co-founder Bill Gates and theoretical physicist Stephen Hawking have their doubts, while others, including the Massachusetts Institute of Technology’s Rodney Brooks, believe that extreme artificial intelligence predictions are "comparable to seeing more efficient internal combustion engines … and jumping to the conclusion that the warp drives are just around the corner".

History is replete with doomsday warnings, from asteroids and tsunamis to nuclear annihilation and climate change. Now we can add evil (or amoral) robots: artificial intelligence capable of exterminating us. But a more moderate response is to recognise the need to develop safety protocols and risk-management strategies, and to get these to industry leaders and policy makers, as suggested by the Centre for the Study of Existential Risk at Cambridge University.

David Tuffley is lecturer in applied ethics and socio-technical studies at Griffith University, Queensland, Australia. This article first appeared in The Conversation.