The rush to make AI a self-service endeavour, available to a broad swath of business users, may create unanticipated legal exposure for companies unprepared to protect AI from human bias.

Salesforce.com has added a new artificial intelligence (AI) learning module to its Trailhead developer education platform with an interesting twist. Rather than teach developers how to build AI outcomes most efficiently, the company’s newest educational module asks that practitioners slow down and focus on creating ethically informed AI solutions.

The new Trailhead educational module, titled "Responsible Creation of Artificial Intelligence," calls attention to an often overlooked threat in AI development, namely unwitting human biases and intentional human prejudices.

Within these new training materials, Salesforce.com calls on Salesforce.com Einstein developers to adopt its own set of core values of “trust, customer success, innovation, and equality.” The company goes so far as to suggest that developers who fail to adhere to these standards in creating AI algorithms may find themselves in breach of its acceptable use policy.

Why is Salesforce.com referencing an acceptable use policy in conjunction with the ethical use of AI? Surely companies not engaged in outright nefarious endeavours would steer clear of anything overtly illegal in building AI outcomes. Certainly, legislative controls such as GDPR and the California Consumer Privacy Act (CCPA) are very clear about what constitutes unlawful use of consumer data. Companies need only adhere to such policies to avoid potential litigation or censure, right?

Not necessarily. Human biases and prejudices can find their way into any AI-informed solution without detection. Throughout the lifecycle of a given AI solution, from data collection to ongoing maintenance, subtle but hugely impactful notions of partiality can creep in, thereafter altering the decisions made by both humans and automated AI routines.

In most cases, biased or unfair AI algorithms remain unnoticed. Only those outliers that are blatantly skewed garner the public's attention, as was the case last October when Amazon noticed that its new talent-recruiting algorithm quite literally hated women. Despite the company's leadership role in developing AI technologies, Amazon fell prey to a common, data-derived bias: the data set used to train its recruiting model was itself skewed toward hiring men over women.

Unfortunately, no software or best practice currently available can readily identify or root out these potentially costly threats. Still, last September IBM attempted to do just that, launching AI Fairness 360, an open source toolkit containing some 70 fairness metrics and 10 bias mitigation algorithms. The toolkit is no safeguard or systemic remedy, but it makes for an excellent start at catching the most basic problems, such as biases hidden in training data through prejudiced labelling or the under- or over-sampling of select advantaged or disadvantaged groups.
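To make the idea concrete, here is a minimal sketch of how a toolkit like AI Fairness 360 can surface a representation bias in training data before any model is built. The toy hiring data, column names and group definitions below are illustrative assumptions, not taken from the article or from any official example.

```python
# Hypothetical sketch: measuring bias in a toy hiring data set with IBM's
# open source aif360 toolkit. The data and column names are invented here.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring records: 'gender' (1 = male, 0 = female) is the protected
# attribute, 'hired' is the outcome label. The sample is deliberately
# skewed toward hiring men, mimicking a historically biased data set.
df = pd.DataFrame({
    "gender":    [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    "years_exp": [3, 5, 2, 7, 4, 6, 5, 3, 6, 4],
    "hired":     [1, 1, 1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)

# A disparate impact well below 1.0 (0.8 is a common rule-of-thumb threshold)
# signals that the unprivileged group receives favourable outcomes far less
# often than the privileged group -- before any model has even been trained.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Checks of this kind only cover the data-level biases the metrics are designed to detect; as the article notes, they are a starting point rather than a remedy.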

Other firms such as Alegion are doing the same, but coupling automated routines with high-touch consultative efforts that completely offload AI model training and data preparation tasks. This approach will find a welcome home with customers that do not have a significant degree of data science expertise. But as illustrated so expertly by Amazon, expertise alone isn’t enough when it comes to the psychology and sociology of bias or prejudice.

This human factor is the key, especially for Salesforce.com, which has sought to make AI a self-service capability across its sizable customer base. The more readily accessible AI becomes, the greater the legal and financial exposure for both Salesforce.com and its customers — hence the company's not-too-subtle reminder that improper use of AI can lead to a breach of its acceptable use policy.

According to Salesforce.com’s new Trailhead module, the best way to combat bias and prejudice is through the healthy application of human diversity. By building diverse teams and by translating values into processes, Salesforce.com believes its customers can at least create an atmosphere of impartiality and objectivity.

Given Salesforce.com's past work to make AI more accessible to its customer base (supporting AI modelling for small data sets, for example), we would expect the company to do far more than render advice on this front. In the meantime, its none-too-subtle call to lay an ethical groundwork well before any AI algorithm is built must be taken seriously by any company hoping to reap the rewards of AI.