Artificial intelligence is advancing rapidly in every area, from the professional sphere to users' private lives. In businesses, employees can now use it during their online meetings. What challenges does AI raise for videoconferencing security? Answers from Jean-Philippe Commeignes, sales director of Tixéo, the French champion of secure video collaboration.
Artificial intelligence is arriving in online meetings. Some videoconferencing vendors now offer virtual assistant features based on generative AI, with the aim of improving user productivity. Integrated into the solution, the virtual assistant can transcribe, translate and subtitle the discussions of an online meeting, or even produce concise written summaries, as ChatGPT does using a large language model. Such AI can also train on a wide variety of data, which allows it to understand and reproduce human language very precisely. The more data the model processes, the more capable it becomes and the more new answers it can provide. In a business context, however, the data exchanged during videoconferences may involve intellectual property or contain employees' personal data. When this data passes through a virtual assistant, the question of confidentiality arises.
Sensitive data may be exposed
According to a report published by Cyberhaven in February 2023, sensitive data accounts for 11% of what employees enter into ChatGPT. With no certainty as to whether the AI provides guarantees for the protection of the data shared with it, researchers warned as early as 2021 of "training data extraction attacks", in which the AI is questioned about data it may have learned. In videoconferencing tools that integrate generative AI, a chain of contextual queries can surface data such as meeting recurrence, participants or topics discussed. In its 2023 report on the impact of artificial intelligence on employees, the OECD found that 57% of European employees in the finance and manufacturing sectors are worried about the protection of their privacy. Their concern stems from the fact that AI can collect more data than humans or other technologies can. That fear is all the more justified in the case of sensitive meetings, such as those of an executive committee (Codir or Comex).
Ways to further secure communications with AI
Within the European Union, the General Data Protection Regulation (GDPR) guarantees the protection of personal data and imposes obligations on the entities that process it. With AI, the rights attached to data collection and processing take on a particular dimension: the system must guarantee transparency and access to information, as well as the correction, deletion and restriction of processing. It is in line with this logic that the AI Act was proposed in 2021 to provide a first regulatory framework for artificial intelligence. It states that "initiatives such as the EU cybersecurity strategy, digital services legislation and digital markets legislation, as well as the data governance law, provide the appropriate infrastructure for the implementation of such systems".
A need for effective safeguards
For videoconferencing solutions that incorporate generative AI, a high level of security is essential to preserve the confidentiality of the data exchanged. Videoconferencing and its AI mechanisms must be subject to strict data protection regulations such as the GDPR. Videoconferencing vendors, for their part, must demonstrate transparency and give users clear guarantees about how their data is used. Note that when these solutions do not include a genuine end-to-end encryption system, the confidentiality of the communications they carry is never assured.