An Amazon Web Services (AWS) director has said that if the cloud provider had a “time machine” the company would take a different approach to the security of its most popular cloud product, AWS S3, to prevent customers from accidentally leaking data.

AWS is the market leader in cloud services and is used by more than a million businesses to store their digital assets. Its S3 service, a data storage repository, has been at the centre of a sizeable number of data breaches in which a customer has mistakenly made a bucket publicly accessible to anyone online.

Ian Massingham, director of developer technology & evangelism at AWS, said that in hindsight two separate cloud products – one public and one private – would have averted such data breaches.

“If we went back in time and created the service again, we probably would create S3 public and S3 private and we wouldn’t have the issue,” Massingham told Verdict in an interview for sister publication Verdict Encrypt at AWS Transformation Day in London.

“You’d have to create a resource in the public service in order to do that. But we don’t have a time machine, unfortunately.”

AWS S3 buckets have been private by default since the service launched in 2006. Over the years, AWS has rolled out additional security measures to make it harder for customers to make S3 buckets public by mistake.

Netflix, Dow Jones, Accenture and the Pentagon are among the organisations that have exposed sensitive data through a misconfigured S3 bucket.

Massingham said that when AWS launched in 2006 it was an “experiment” and that the Amazon subsidiary had “no idea how popular” its service would become.

“Maybe if we’d known the service would be so heavily adopted, we might have made a few different design decisions right at the beginning,” said Massingham.

AWS S3 security: Countering user error

Security researchers point out that AWS has taken the correct action in light of customers’ security slip-ups.

“AWS is taking this seriously and is making it harder for people to make these mistakes,” said security researcher Noam Rotem who, along with Ran Locar, has made a name for himself uncovering data repositories with poor security.

“However, you cannot stop idiots from being idiots. AWS cannot prevent the ability to open the database to external connections. They can advise against it and they are doing a good job at it.


“But if someone actively goes against best practices and common sense, there is little they can do about it.”

However, Massingham does not believe the blame should land squarely at the feet of the user.

“We’ve got to do a better job of ensuring that customers are aware of the potential risk posed by having lax security policies on all cloud-based resources – not just on S3 buckets,” he said.

Misconfiguration of cloud services is not limited to AWS, with rivals Microsoft Azure and Google Cloud Platform having also been susceptible to user error.

In addition to making it harder to mistakenly make an S3 bucket public, AWS has sent out awareness campaigns to its customers.

Massingham’s main advice for organisations using AWS is to re-evaluate the security levels on their S3 buckets.

“If they’ve got any usage of S3 in their organisation, no matter how old it is, or how they use it, it would be great for them to check the bucket policy they have matches the use case they have in mind,” he said.
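Auditing a bucket policy along the lines Massingham suggests can be partly automated. As a minimal illustrative sketch (not official AWS tooling; the helper name and sample policy are hypothetical), the Python below parses a bucket policy document and flags Allow statements whose principal is the wildcard `*`, which is what typically makes a bucket world-readable:

```python
import json


def policy_allows_public_access(policy_json: str) -> bool:
    """Return True if any statement grants access to everyone.

    Simplified check: flags Allow statements whose Principal is "*"
    (or {"AWS": "*"}). A real audit should also consider statement
    conditions, object ACLs and the account-level Block Public
    Access settings.
    """
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        ):
            return True
    return False


# Example: a policy that makes every object in the bucket world-readable.
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(policy_allows_public_access(public_policy))  # True
```

In practice, the policy document would be fetched for each bucket (for example via the AWS CLI or an SDK) and any flagged bucket reviewed against the use case it is meant to serve.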


You can read the full issue of Verdict Encrypt for the latest B2B cybersecurity insights here.