Companies continue to grapple with how to use artificial intelligence (AI) in ways that don’t compromise privacy and that take other key ethical considerations into account. AI is undeniably one of the fastest-growing technology categories today. But if we cannot build tools that live up to high ethical standards, we are not only doing end users a disservice; we are failing to deliver on the promise of this technology as a force for good.
AI ethics has been one of the hottest topics in the technology community for decades. Yet as an industry, we are still struggling to ensure these systems are used ethically, let alone to solve the problems they were built to address. Fortunately, there is one group that could be the guiding force we need: developers.
From the ideation process to implementation, the developer community is by far the group with the most intimate knowledge of how AI products actually work; therefore, they can have a tremendous influence on what these tools actually do and how ethical they are—and they need to be empowered to make these calls.
With that in mind, here are four key questions that companies and their developer teams should ask as they work to build the ethical AI solutions of the future.
Is This Transparent in Every Way?
If a developer ever finds themselves saying, “I don’t know,” when asked how a product works or how it will ultimately be used, production should stop immediately. The power of automation is tremendous, but given how quickly these systems can morph and learn, oversight and explainability must be ingrained in their functionality from the start. If that can’t be guaranteed, developers should be empowered to flag their concerns and look for ways to remedy them before production continues.
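To make this concrete, consider a minimal sketch of what an “explainability gate” in a release pipeline might look like. It uses scikit-learn’s permutation importance to flag a model whose behavior hinges on a single feature the team cannot account for; the threshold and the policy itself are illustrative assumptions, not an established standard:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical policy: no single feature may dominate the model's behavior
# unless the team can fully explain its role.
MAX_SINGLE_FEATURE_SHARE = 0.5

# Stand-in data and model for the sketch.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Measure how much each feature drives predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
importances = np.clip(result.importances_mean, 0, None)
shares = importances / importances.sum()

for idx, share in enumerate(shares):
    print(f"feature_{idx}: {share:.1%} of measured importance")

# The "gate": halt the release if the model's behavior can't be accounted for.
if shares.max() > MAX_SINGLE_FEATURE_SHARE:
    raise RuntimeError(
        f"feature_{shares.argmax()} drives {shares.max():.1%} of predictions; "
        "halt release until the team can explain why."
    )
```

A real gate would use the team’s own model, metric and policy, but the principle is the same: if no one can explain what drives the output, the build does not ship.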
As AI continues to evolve, designing and developing AI systems will keep leading us into areas where no clear legal, social or policy guidelines exist for many of the potential uses, and misuses. So, now is the time for developers to establish transparent, clear channels of communication where they can actively raise concerns and deliberate on ethical dilemmas with diverse stakeholders. This is crucial to reducing or eliminating AI’s negative effects. Otherwise, the industry’s current missteps around AI ethics will simply keep repeating.
Have We Accounted for Risk and Exceeded Regulatory Requirements?
Virtually any big idea has its risks. Unfortunately, when it comes to AI, these risks generally involve unethical use or unintended consequences. The ongoing debate around facial recognition is a perfect example. While the technology is intended to improve security and may have other benefits, it comes with significant privacy risks and fears over misuse and misapplication. Moreover, because the regulatory landscape around AI is still taking shape, companies can fall into the trap of exploiting these gray areas in exchange for being first to market or gaining a competitive edge. This is why companies that deployed facial recognition without privacy-preserving safeguards, for example, have had to walk back their products.
Simply put, acting this way can have terrible consequences for the ethics around an AI product. The good news is that the fix is relatively straightforward: Instead of building products that barely meet the requirements in regulatory gray areas, make sure they exceed them. Granted, this can mean longer go-to-market timelines and significant legwork. However, it is the only realistic way to “future-proof” products against potential unethical use cases.
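What might “exceeding” a requirement look like in practice? One hedged sketch, with hypothetical data categories and limits, is to encode both the regulatory ceiling and a deliberately stricter internal ceiling, then audit the gap between them:

```python
from dataclasses import dataclass

@dataclass
class RetentionPolicy:
    name: str
    regulatory_max_days: int   # what the (hypothetical) rule permits
    internal_max_days: int     # what we actually allow ourselves

    def is_stricter_than_required(self) -> bool:
        # "Exceeding" compliance means our limit sits below the legal ceiling.
        return self.internal_max_days < self.regulatory_max_days

# Illustrative categories and numbers, not real regulatory figures.
policies = [
    RetentionPolicy("face_embeddings", regulatory_max_days=365, internal_max_days=30),
    RetentionPolicy("raw_images", regulatory_max_days=90, internal_max_days=0),
]

for p in policies:
    status = "OK" if p.is_stricter_than_required() else "REVIEW"
    print(f"{p.name}: keep {p.internal_max_days}d "
          f"(rule allows {p.regulatory_max_days}d) [{status}]")
```

Encoding both numbers side by side makes the margin of safety explicit, so a future rule change tightens against the internal limit rather than catching the product out.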
Is This System Adaptable?
Even the best preparation and forethought do not guarantee that a tool will operate exactly as intended. This is why it is so important for developers to be able to plan for contingencies and build adaptability into their tools, so that adjustments can be made whenever ethical questions arise. This can be particularly helpful in addressing bias concerns that may emerge as AI products learn and expand.
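As an illustration of what that adaptability might look like, the sketch below runs a periodic fairness check on live predictions using the demographic parity gap, the largest difference in positive-prediction rates between groups. The group labels, stand-in data and 0.10 tolerance are all assumptions made for the example:

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Stand-ins for real model outputs and a protected attribute.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=1000)
groups = rng.choice(["a", "b", "c"], size=1000)

gap = demographic_parity_gap(preds, groups)
ALERT_THRESHOLD = 0.10  # hypothetical tolerance set by an ethics review
if gap > ALERT_THRESHOLD:
    print(f"ALERT: parity gap {gap:.2f} exceeds {ALERT_THRESHOLD}; "
          "trigger retraining review")
else:
    print(f"parity gap {gap:.2f} within tolerance")
```

The point is not this specific metric; it is that the system ships with a hook that can trigger review and retraining when its behavior drifts, rather than requiring a full redesign.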
Technology is a dynamic, rapidly evolving category, and developers are often working on very compressed timelines. But to get AI right, developers need time to suss out the bigger picture around their products and bake in everything possible to set them up for ethical use in the short, medium and long term. If they don’t, a company risks not only reputational damage but also an indefinitely stunted product pipeline as products have to be pulled offline entirely.
Are Developer Teams Actually Prepared to Build Ethically?
Setting a concrete mission statement that is centered around ethics is a great start for any technology company. But without a more systematic approach to instilling principled behavior in developer teams, companies will struggle to drive tangible results in their pursuit of building ethical products. Additionally, given how complex ethical AI development has become—thanks to innovations in quantum computing, among other things—even the most seasoned developers will have trouble keeping up with the developments and desired standards relating to a company’s ethical AI goals.
Therefore, beyond just laying out overarching messaging, companies need to codify their ethical intentions by building an entire infrastructure that equips developers with the ongoing training and resources they need to constantly keep ethical standards in mind. If not, confusion will almost certainly ensue, missteps will occur and ethical standards will slip as a result.
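One small way to codify those intentions, sketched below with hypothetical checklist items, is to turn the ethics checklist itself into an automated release gate rather than a document that can be skipped:

```python
# Hypothetical checklist items; a real one would reflect the company's policy.
CHECKLIST = {
    "model_card_published": True,
    "bias_audit_completed": True,
    "data_provenance_documented": False,
    "rollback_plan_in_place": True,
}

def require_all(checklist: dict[str, bool]) -> None:
    """Block the release unless every ethics item is complete."""
    missing = [item for item, done in checklist.items() if not done]
    if missing:
        raise SystemExit(f"Release blocked; incomplete items: {', '.join(missing)}")
    print("All ethics checks passed; release may proceed.")

require_all(CHECKLIST)
```

Run as-is, this sketch blocks the release because one item is incomplete, which is exactly the behavior an infrastructure-level commitment should produce.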
Unfortunately, despite its promise, the AI world has fallen short of establishing the ethical frameworks needed both to solve the problems of tomorrow and to serve the public good. But this doesn’t have to be the case. By asking these few simple questions, companies can empower developers and build the ethical development infrastructure AI needs to achieve its full potential.