Regulation of artificial intelligence
Artificial intelligence, its research, and the exploitation of its potential have long occupied humanity. However, one serious obstacle has always stood in the way of AI's explosive development: its (ethical) regulation. After years of consultation, the High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence, which outline the requirements below.
According to the published guidelines, trustworthy artificial intelligence should be:
- lawful: it respects all applicable laws and regulations
- ethical: it adheres to ethical principles and values
- robust: both technically and with regard to its social environment
In addition to the above guidelines, seven key requirements have been identified:
- human agency and oversight
- technical robustness and safety
- privacy and data governance
- transparency
- diversity, non-discrimination and fairness
- societal and environmental well-being
- accountability
To check compliance with these requirements, an assessment list has also been developed to determine the extent to which a given artificial intelligence system meets the conditions. The adopted document also contains practical instructions for carrying out the assessment. Piloting of the assessment list is still ongoing.
Thus, the development of a regulatory framework for ethical artificial intelligence is already well advanced, but professionals still have a long way to go. Moreover, regulation is likely to remain an ongoing process, as the spread of artificial intelligence will pose ever greater challenges. We can only hope that the solutions put into practice will be ethical ones that place artificial intelligence in the appropriate service of humanity.
Learn more about the areas of text analytics in which we use artificial intelligence.
Our blog post was based on the study by the European Commission.