This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.
Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans human?”
What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans”.
Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”
But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI “vision” today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.
Whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.
On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm”, or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.
While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.
To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.
Mary Shelley’s novel Frankenstein is mentioned because it _____.
- A. fascinates AI scientists all over the world
- B. has remained popular for as long as 200 years
- C. involves some concerns raised by AI today
- D. has sparked serious ethical controversies
Correct Answer and Explanation

Correct Answer: C

Explanation

This is a factual-detail question. The names Mary Shelley and Frankenstein in the question stem point to the first sentence of the opening paragraph, which notes that this year marks the 200th anniversary of the publication of Mary Shelley's Frankenstein. The second sentence adds that the novel foreshadowed many ethical questions that would be raised by technologies yet to come. Reading the full passage shows that it mainly discusses artificial intelligence today and the concerns it may raise, so the opening reference to Mary Shelley's novel serves to introduce the passage's theme. Option C is therefore correct.