Like the internet and the smartphone before it, artificial intelligence, and generative AI in particular, is the next major technological shift. As with any new technology, however, it has limitations and raises real problems: concerns around accuracy, bias and inequity, privacy and security, and content provenance.
AI lets businesses move faster and become more productive, but trust must come first. To address these trust concerns, Salesforce published five recommendations for safe generative AI development earlier this year.
Salesforce also advocates for tailored, risk-based AI governance that differentiates between the contexts and uses of the technology, safeguards privacy, builds trust, and encourages innovation.
Accessible, reliable, and ethical AI
A tailored approach is key: a one-size-fits-all approach to regulation could stifle innovation, disrupt healthy competition, and delay the adoption of technologies that consumers and businesses around the world are already using to boost productivity.
For instance, while Salesforce supports ethical AI development across the board, a small team of engineers building a generative AI chatbot that teaches college students to cook should face fewer regulatory checks than medical professionals using AI tools to help diagnose patients and develop care plans.
Salesforce applauds legislators who take a nuanced approach to crafting regulation and who consider the principles below as they guide society through this critical period:
A risk-based framework: How a technology is used matters, and some sectors, such as healthcare, carry greater risk for consumers and society than others. Risk-based AI regulation would focus the most scrutiny on high-risk applications, particularly those with legal, economic, or ethical ramifications that could cause serious harm or affect people’s rights and freedoms (a simple illustration of this principle follows this list).
Differentiation based on use, context, and control: Regulation should distinguish between the uses, contexts, and controls of the technology and set guardrails accordingly. Developers of generative AI models, for example, should be responsible for how models are trained and for the training data, while those deploying the technology should set standards for how the tool is used.
Data privacy laws: Safeguarding the fundamental human right to privacy is the cornerstone of ethical AI governance. Data is the fuel of AI, so additional regulation tailored to generative AI should address the use and privacy of personal data in the training of future models, protecting individual privacy across the AI ecosystem.
Transparency: AI systems can operate as “black boxes,” making it hard to understand how they reach decisions. People should know the “why” behind AI-driven recommendations, be given the tools to understand it, and be told whether they are interacting with a human or an AI.
Accountability and government oversight: AI impact assessments are one way to encourage accountability and trust in high-risk AI systems. Licences or notifications can also support accountability and compliance, but they should be applied through a risk-based approach that balances compliance with market access, competition, and innovation. Risk management frameworks, such as those published by NIST, provide a useful foundation for the field.
Harmonisation with existing regulation: Many laws and policies already act as guardrails for AI; global data protection rules are just one example. As regulators and other stakeholders develop new rules, they should assess and clarify whether these issues are already covered by existing legislation.
Global and future-proof application: Given the pace of innovation, AI legislation should be globally interoperable, flexible, and durable. Rather than centring on a particular technology at a particular moment, regulation should provide a policy framework for the ethical development and deployment of AI systems.
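To make the risk-based and differentiated principles above concrete, here is a minimal, purely hypothetical sketch of how a deployer might encode risk tiers and the guardrails attached to each. The tier names, fields, and example use cases are illustrative assumptions only; they are not Salesforce’s framework, the EU AI Act’s categories, or any regulator’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers, loosely modelled on the risk-based approach
# described above; the tiers and fields are illustrative, not a real schema.
class RiskTier(Enum):
    LOW = "low"    # e.g., a chatbot that teaches students to cook
    HIGH = "high"  # e.g., an AI tool that assists with patient diagnosis

@dataclass(frozen=True)
class GovernanceControls:
    impact_assessment_required: bool  # accountability: AI impact assessments
    human_oversight_required: bool    # human review for consequential decisions
    ai_disclosure_required: bool      # transparency: tell people it's an AI
    training_data_documented: bool    # developer-side responsibility for data

# Obligations scale with risk: heavier guardrails apply only where the
# potential for harm is high.
CONTROLS_BY_TIER: dict[RiskTier, GovernanceControls] = {
    RiskTier.LOW: GovernanceControls(
        impact_assessment_required=False,
        human_oversight_required=False,
        ai_disclosure_required=True,   # disclosure applies at every tier
        training_data_documented=True,
    ),
    RiskTier.HIGH: GovernanceControls(
        impact_assessment_required=True,
        human_oversight_required=True,
        ai_disclosure_required=True,
        training_data_documented=True,
    ),
}

def required_controls(tier: RiskTier) -> GovernanceControls:
    """Return the guardrails a deployment must satisfy for its risk tier."""
    return CONTROLS_BY_TIER[tier]

if __name__ == "__main__":
    print(required_controls(RiskTier.LOW))   # the cooking chatbot's obligations
    print(required_controls(RiskTier.HIGH))  # the diagnostic tool's obligations
```

The point of the sketch is simply that obligations scale with risk: the low-risk cooking chatbot carries only baseline disclosure and documentation duties, while the high-risk diagnostic tool additionally triggers impact assessments and human oversight.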
Collaboration between government, business, academia, and civil society is essential
Because AI is a consequential and fast-moving issue for society, Salesforce actively engages with governments and all stakeholder groups to establish ethical, risk-based, and globally interoperable AI norms.
- Salesforce encourages public-private collaboration, including the G7 initiative and the OECD’s long-standing artificial intelligence (AI) initiatives.
- In the US, Salesforce submitted comments to the Office of Science and Technology Policy (OSTP) and the National Telecommunications and Information Administration (NTIA) dockets on the AI Accountability Policy.
- Salesforce applauds the ongoing development of the EU AI Act and strongly urges that the risk-based approach be preserved in the Act’s final text.
- In order to promote AI innovation, Salesforce has also promised to invest $4 billion in its UK operation over the next five years.
- Through the U.S. National AI Advisory Committee, the Singapore Advisory Council for the Ethical Use of AI, Singapore’s AI Verify Foundation, and the U.S. Chamber of Commerce Artificial Intelligence Commission, Salesforce officials are actively engaged in multi-stakeholder talks.
Salesforce is committed to building trusted, transparent, and accountable AI systems that prioritise accuracy, fairness, privacy, and positive social impact, and it will maintain that commitment as the technology evolves.