
Rocío Bachmaier
CEO & Co-founder
July 13, 2023

The European Union's AI Act (AIA) is set to revolutionize the regulation of artificial intelligence, establishing a comprehensive framework to ensure responsible and ethical AI practices. This landmark legislation not only holds significance for businesses operating within the EU but also aims to set a global standard for AI regulation.  

As we discussed in our previous article The AI Act Unfolded: A 7 Minute Briefing on Why it Matters to Your Business, the AIA introduces crucial guidelines and requirements that businesses need to follow. Failing to comply with the AIA can lead to severe financial consequences, including fines of up to 30 million euros or 6% of annual revenue, whichever is higher.

   By proactively preparing for the AI Act, businesses can take a strategic step in their AI journey.  

Ensuring compliance with the AI Act is therefore essential to meet regulatory obligations and mitigate the risks associated with AI use. By proactively preparing for the AIA, businesses can already take a strategic step in their AI journey towards safeguarding their operations, avoiding substantial penalties, and fostering trust among customers, authorities, and employees regarding their responsible AI practices.

In this article, we delve into the EU AI Act requirements and present three actionable steps that businesses can take for compliance and ethical AI operations, empowering them to stay ahead in the evolving AI landscape.  

   Unpacking Compliance Categories  

   Requirements from the AI Act are tailored to the risk category of AI systems, ensuring a proportional approach to regulation and covering different areas such as transparency, accountability, risk management, data quality, and reporting obligations.  

   The EU AI Act defines a classification of AI systems based on their risk levels, dividing them into four distinct categories: unacceptable, high, limited, and minimal. While the Act primarily focuses on regulating AI systems with a high level of risk, it is crucial to understand the compliance requirements associated with each risk level. See Exhibit 1 for a quick snapshot below.  

   1 - Unacceptable Risk:  

   AI systems falling under the unacceptable risk category are strictly prohibited within the European Union. These include systems associated with social scoring or biometric identification that pose significant risks to fundamental rights and freedoms, with a few exceptions laid out in Article 5.  

   2 - High Risk:  

   For high-risk AI systems, additional safeguards and compliance measures are necessary to ensure safety, accountability, and transparency. The compliance requirements for high-risk systems include:  

  • Human Oversight: High-risk AI systems must incorporate human oversight to ensure their safe and accurate operation.
  • Transparency: Clear disclosure of the types of data used, the underlying algorithms, and decision-making processes of the AI system.
  • Risk Management: Thorough testing, proper documentation of data quality, and establishment of accountability frameworks to manage potential risks.
  • Data Quality: Ensuring the accuracy, representativeness, and absence of bias in the data used to train high-risk AI systems.
  • Monitoring and Reporting: Implementing mechanisms for ongoing monitoring, incident detection, and reporting to address any issues that may impact fundamental rights or safety.

   3 - Limited & Minimal Risk:  

   AI systems with limited or minimal risk have fewer compliance requirements compared to high-risk systems. However, organizations deploying limited-risk AI systems should still focus on transparency obligations. This includes making users aware that they are interacting with an AI system, disclosing system characteristics like emotion recognition or biometric classification, and notifying users if AI-generated content may falsely represent its origin or nature.  

Exhibit 1

   Three Actionable Steps to Prepare and Thrive  

   The implementation of the AI Act will be only the beginning of a global effort to mitigate AI risks. It is important to note that, even though the EU AI Act is still under development and has not yet gone through the full legislative process, businesses that proactively define a strategy to address these risks will position themselves for long-term success in this rapidly evolving technological landscape.  

   Unfortunately, many companies underestimate the importance of risk management, jeopardizing not only their own operations but also potentially causing harm to end users and society. However, organizations will have the opportunity to leverage the framework provided by the AI Act as a foundation to develop their own internal AI risk-based classification.  

   Organizations will have the opportunity to leverage the framework provided by the AI Act as a foundation to develop their own internal AI risk-based classification.  

   By embracing the AI Act and its risk management guidelines, organizations can establish a standardized approach to address the challenges associated with AI. This will not only create a level playing field but also foster a more competitive and homogeneous AI landscape.  

   This process should include:  

   

  1. identifying all the AI systems being used and regularly assessing the level of risk for each of them
  2. taking measures to reduce those risks, and
  3. defining an AI governance framework

   To do this, the organization needs a clear strategy for how AI fits into its overall goals, a review process with multiple checks before an AI system is put into use, and strong protocols for keeping data private and secure, since many AI systems process sensitive personal data.

   Let’s review three key steps companies can already take to move towards compliance, along with suggested actions. See Exhibit 2 below for a quick snapshot.

   1. Build an inventory of AI systems and their risks  

   Create a list of all the AI systems that the company is using or plans to use, and identify the risks associated with each system. Suggested steps:  

   

  • Conduct a thorough inventory of AI systems used or planned for use within the organization.
  • Perform regular risk assessments of AI systems to ensure they meet internal policies and, in the future, the required regulations. This assessment involves checking if the systems have been developed with proper considerations for privacy, accuracy, and potential risks on safety and human rights.
  • Categorize AI systems based on their risk levels, starting from the EU AI Act classification: unacceptable, high, limited, and minimal.

   This will help the company understand where AI is being used, its risks, and in the future, assess any potential compliance gaps.  
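As a purely illustrative sketch (the system names, fields, and example entries below are assumptions, not anything prescribed by the Act), such an inventory can start as a simple structured list that is revisited at each risk assessment:

```python
from dataclasses import dataclass
from enum import Enum

# Risk levels mirroring the EU AI Act classification
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_level: RiskLevel
    processes_personal_data: bool
    last_assessed: str  # date of the most recent risk assessment

# Hypothetical example inventory
inventory = [
    AISystem("cv-screening", "rank job applicants", RiskLevel.HIGH, True, "2023-06-01"),
    AISystem("support-chatbot", "answer customer questions", RiskLevel.LIMITED, True, "2023-05-15"),
    AISystem("spam-filter", "filter inbound email", RiskLevel.MINIMAL, False, "2023-04-20"),
]

# Surface the systems that would need the heaviest compliance measures
high_risk = [s.name for s in inventory if s.risk_level is RiskLevel.HIGH]
print(high_risk)  # ['cv-screening']
```

Even a lightweight record like this makes it straightforward to answer where AI is in use, which systems would fall into the high-risk bucket, and when each one was last assessed.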

   2. Implement Risk Mitigation Measures  

   Some of the safeguards and mitigation measures necessary to address the identified risks based on the AI risk classification framework are:  

   For high-risk AI systems:  

   

  • Establish human oversight mechanisms to ensure safe and accurate operation. This may involve human intervention in decision-making processes or regular audits by human experts.
  • Implement transparency measures to disclose the data used, algorithms employed, and how decisions are made by the AI system.
  • Develop robust risk management protocols, including thorough testing, documentation of data quality, and an accountability framework.
  • Implement cybersecurity measures to protect high-risk AI systems from potential adversarial attacks.

   For limited and minimal-risk AI systems, organizations will need to adhere to transparency obligations:  

   

  • Disclose to users that they are interacting with an AI system.
  • Provide clear information about the AI system's functionalities, limitations, and any potential risks associated with its use.
  • Specify if the system employs sentiment analysis (such as emotion recognition in chatbots) or biometric classification techniques (such as face recognition for phone screen unlock).

   3. Develop an AI Governance Framework  

   Set up a dedicated team or committee that will oversee AI-related risks and compliance. This team should include experts from legal, data, and technology fields in order to cover data privacy, bias mitigation, ethical considerations, and compliance. They will define the rules and standards for AI usage, conduct regular audits, and guide the business on making responsible AI decisions. Suggested steps:  

   

  • Implement monitoring mechanisms to track the performance and behavior of AI systems.
  • Identify any malfunctions that could impact fundamental rights or safety.
  • Ensure ongoing data quality and system accuracy through regular assessments and updates.
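One minimal monitoring check, sketched here with an illustrative metric and tolerance (both are assumptions, not requirements from the Act), is to flag a system whose measured accuracy drifts below its documented baseline:

```python
# Sketch of a periodic quality check for a deployed AI system.
# The 5% tolerance and the use of accuracy as the metric are
# illustrative assumptions; real systems will track several metrics.

def needs_review(baseline_accuracy: float, current_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Flag the system for a human audit if accuracy drops more
    than `tolerance` below the documented baseline."""
    return (baseline_accuracy - current_accuracy) > tolerance

# Example: 0.92 documented at deployment, 0.84 measured this month
if needs_review(0.92, 0.84):
    print("Flag for human audit and incident report")
```

The point of the sketch is the pattern: a documented baseline, a scheduled measurement, and an automatic trigger into the governance team's audit process when the two diverge.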

   By undertaking these key actions, organizations can not only proactively prepare for compliance with the AI Act, but also effectively manage the risks associated with AI systems, ensure transparency, and maintain responsible and ethical AI practices.

   A new chance to do it right  

   Through the first wave of AI, we saw how not addressing the risks of this technology could cause great harm and unforeseeable consequences. Social media algorithms intended to help everyone have a voice, connect with friends, and join like-minded communities also resulted in disinformation, mental health issues, addiction, and polarization.

   The AI Act will present compliance challenges for organizations using or providing AI systems within the EU. However, it also offers businesses a great opportunity to safeguard their operations and contribute to the responsible and ethical development of AI.

   This will encourage organizations to future-proof themselves so they can manage AI risks in a systematic and efficient manner when working with this innovative technology. It is our new chance to do it right.


   Compliance with the EU AI Act may seem like a daunting task, but it doesn’t have to be.  

   Don’t let the complexity of the regulation hold your business back from leveraging the power of AI. With my expertise, you can create a successful AI strategy that aligns with your business objectives and meets the new regulations.  

   Please feel free to reach out here for an initial, no-commitment consultation with Atlan Insights. We can discuss your specific requirements and explore how my expertise can assist you in staying ahead in today’s competitive landscape.  

Book a Free Assessment Today
Reach out and book a 30-min consultation to assess your current AI possibilities for free