ISSN :2582-9793

Superintelligence Safety: A Requirements Engineering Perspective

Original Research (Published On: 25-Mar-2023)
DOI : 10.54364/AAIML.2023.1156

Hermann Kaindl and Jonas Ferdigg

Adv. Artif. Intell. Mach. Learn., 3 (1):947-957

Hermann Kaindl : TU Wien

Jonas Ferdigg : TU Wien



Article History: Received on: 22-Feb-23, Accepted on: 20-Mar-23, Published on: 25-Mar-23

Corresponding Author: Hermann Kaindl


Citation: Hermann Kaindl and Jonas Ferdigg (2023). Superintelligence Safety: A Requirements Engineering Perspective. Adv. Artif. Intell. Mach. Learn., 3 (1):947-957



Under the headline “AI safety”, a wide-reaching issue is being discussed: whether some future “superhuman artificial intelligence” or “superintelligence” could pose a threat to humanity. The late Stephen Hawking, for instance, warned that the rise of robots may be disastrous for mankind. A major concern is that even a benevolent superhuman artificial intelligence (AI) may become seriously harmful if its given goals are not exactly aligned with ours, or if we cannot specify its objective function precisely. Metaphorically, this is compared to King Midas in Greek mythology, who wished that everything he touched would turn to gold; obviously, this wish was not specified precisely enough. In our view, these sound like requirements problems and the challenge of their precise formulation. (To the best of our knowledge, this has not been pointed out yet.) As usual in requirements engineering (RE), ambiguity or incompleteness may cause problems. In addition, the overall issue calls for a major RE endeavor: figuring out the wishes and the needs with regard to a superintelligence, which will in our opinion most likely be a very complex software-intensive system based on AI. This may even entail theoretically defining an extended requirements problem.

