At Google’s annual Google I/O developer conference, the search giant showcased a number of impressive implementations of artificial intelligence.

One demonstration showed how Google Assistant could autonomously book a haircut appointment and make a dinner reservation through convincing machine-to-human conversations.

Impressive, but what happens if Google Assistant eventually learns how to masquerade as its owner?

Google chief executive Sundar Pichai demonstrated a number of new AI capabilities for Android and Google Assistant designed to do one thing — convincingly emulate human conversation.

In so doing the company has thrown into stark relief a number of sweeping questions concerning identity and privacy.

During Pichai’s keynote demonstration, the Google CEO showed how Google Duplex, a new AI service running inside Google Assistant, could autonomously dial brick-and-mortar businesses and arrange services on a user’s behalf.


Using nothing more than a simple spoken instruction: “Make me a haircut appointment on Tuesday morning anytime between 10am and 12pm,” Google Duplex dialled a hair salon and proceeded to negotiate for a time slot.

The technology even went so far as to clarify the type of haircut desired, presumably drawing upon the contextual awareness of the user’s past haircut appointments.
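
Google has not published how Duplex works internally, but the first step in any such request is presumably to turn the spoken instruction into a structured task. The following Python sketch is purely illustrative of that slot-filling idea; the BookingRequest schema, its field names and the regular expressions are assumptions made for this example, not Google’s implementation.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class BookingRequest:
    """Structured task extracted from a spoken instruction (hypothetical schema)."""
    service: str        # e.g. "haircut"
    day: str            # e.g. "Tuesday"
    window_start: str   # earliest acceptable time, e.g. "10:00"
    window_end: str     # latest acceptable time, e.g. "12:00"


def parse_instruction(utterance: str) -> Optional[BookingRequest]:
    """Tiny rule-based slot filler -- a stand-in for whatever NLU Duplex really uses."""
    text = utterance.lower()
    service = "haircut" if "haircut" in text else None
    day = re.search(r"\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday)\b", text)
    window = re.search(r"between\s+(\d{1,2})\s*(?:am|pm)?\s+and\s+(\d{1,2})\s*(?:am|pm)?", text)
    if not (service and day and window):
        return None
    return BookingRequest(
        service=service,
        day=day.group(1).capitalize(),
        window_start=f"{window.group(1)}:00",
        window_end=f"{window.group(2)}:00",
    )


print(parse_instruction(
    "Make me a haircut appointment on Tuesday morning anytime between 10am and 12pm"
))
# BookingRequest(service='haircut', day='Tuesday', window_start='10:00', window_end='12:00')
```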

A second demonstration, concerning the arrangement of a dining reservation, was even more impressive because the call did not go according to plan.

The person speaking with Google Assistant did not accurately hear the original request, which prompted the software to restate it.

It did so with a surprisingly nuanced use of audible social cues, such as the word “umm”, conveying patience as it corrected the human’s failure to catch the original query.

Equally impressive, during the same call the Google Duplex AI learned that the restaurant would not accept a dining reservation for the desired date and time.

It then reasoned that the next logical step would be to find out whether the restaurant would be too busy at that time for a walk-in visit, ascertaining as much through a convincing and unscripted verbal exchange.
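
The restaurant exchange hints at a dialogue policy with an explicit fallback: when the preferred outcome (a reservation) is refused, ask a cheaper follow-up question (how long a walk-in would wait) rather than give up. Below is a toy sketch of that decision logic; the RestaurantReply stub, the Outcome states and the 30-minute threshold are all invented for illustration and say nothing about how Duplex actually reasons.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Outcome(Enum):
    RESERVED = auto()
    WALK_IN_OK = auto()
    TRY_ANOTHER_TIME = auto()


@dataclass
class RestaurantReply:
    """What the human on the phone tells the agent (stubbed for illustration)."""
    reservation_available: bool
    walk_in_wait_minutes: Optional[int] = None


def decide_next_step(reply: RestaurantReply, max_wait: int = 30) -> Outcome:
    """Toy fallback policy: prefer a reservation; otherwise ask about a walk-in."""
    if reply.reservation_available:
        return Outcome.RESERVED
    # Reservation refused, so fall back to the cheaper question the demo asked:
    # roughly, "how busy would you be at that time for a walk-in?"
    if reply.walk_in_wait_minutes is not None and reply.walk_in_wait_minutes <= max_wait:
        return Outcome.WALK_IN_OK
    return Outcome.TRY_ANOTHER_TIME


# e.g. the reservation is refused but the restaurant says walk-ins would be fine
print(decide_next_step(RestaurantReply(reservation_available=False, walk_in_wait_minutes=15)))
# Outcome.WALK_IN_OK
```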

In 1950, computer scientist Alan Turing foresaw this moment when he proposed the Turing Test, a means of evaluating whether or not a machine could pass for a human in conversation.

Even with Pichai’s two short demonstrations at Google I/O, it is clear that Google Duplex did indeed pass this test with flying colours.

Google Duplex would not be the first computer programme to do so, but where earlier contenders made their case with text on a screen, Duplex did it with spoken language.

Google is likely to take the idea of an AI masquerading as a human even further later this year, when it rolls out an iteration of the Google Assistant voice that replicates the tenor and tone of American R&B singer John Legend.

The default female voice used by Google Assistant right now is actually based on a real person (code-named Holly), but she had to record a massive number of words and phrases in order to sound moderately convincing in day-to-day use.

Using a programme called Google WaveNet, Pichai demonstrated a synthesised John Legend voice reading out items such as a daily schedule in a way that sounded entirely natural and fluid in both tone and cadence.

Google WaveNet freed Legend from having to spend hours in front of a microphone in order to digitise his voice for conversational purposes.
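
Google has not detailed what powers the Assistant’s new voices internally, but WaveNet-based synthesis is publicly available through the Google Cloud Text-to-Speech API, which gives a rough sense of how little recording effort natural-sounding speech now requires. The sketch below assumes that public API; the voice name, output file and credential setup are illustrative, and this is not the Assistant’s internal pipeline.

```python
# pip install google-cloud-texttospeech   (also needs Google Cloud credentials configured)
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Here is your schedule for today."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # one of the publicly listed WaveNet voices
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

# The whole clip is generated from text alone; no studio recording of each phrase is needed.
with open("schedule.mp3", "wb") as out:
    out.write(response.audio_content)
```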

Google intends to use WaveNet not just for celebrity appearances but also to better accommodate different languages and local dialects. The same approach could also be used to model a user’s own voice, replacing the stock Google Assistant voice.

This possibility represents both the promise and peril of AI.

By using AI to this degree, Google may irreversibly blur the lines between human and computer.

A user could plausibly replace Google Assistant’s voice with his or her own through technology like WaveNet and, via Duplex, instruct his or her phone to carry on convincing conversations.

What happens when the computer sitting in the palm of your hand right now can convincingly masquerade as you?

If a computer can indeed pass as a person (a true avatar), companies, governments and people will need to radically rethink a number of basic assumptions concerning identity and responsibility.

If Google Assistant, for example, negotiates on behalf of a person for something deemed illegal, is that person legally liable for the resulting transaction?

If a person’s Google Assistant avatar is compromised, is that actual identity theft? Would Google Assistant be legally bound to identify itself as a machine and not the owner? If so, would that apply to personal conversations?

The answers to such questions will likely come from both corporate mandates, such as Microsoft’s Human Rights Statement, and governmental controls, such as the fast-approaching GDPR in Europe, both of which seek to define clear lines of ownership and responsibility between people, software, and data.

Without such guidelines, the rapid rate of innovation as demonstrated by Google with Duplex and Wavenet may serve not to better society but instead to undermine our trust in one another.