It is unlikely to be at the hands of human-shaped
forms like these, with recognisably human motivations.
The sentence above is from Reading Passage D of the 2020 Beijing College Entrance Examination (gaokao) English paper. Why does it use the adverb recognisably here rather than the adjective recognisable?
The full passage is attached for reference:
Certain forms of AI are indeed becoming ubiquitous. For example, algorithms (算法) carry out huge volumes of trading on our financial markets, self-driving cars are appearing on city streets, and our smartphones are translating from one language into another. These systems are sometimes faster and more perceptive than we humans are. But so far that is only true for the specific tasks for which the systems have been designed. That is something that some AI developers are now eager to change.
Some of today’s AI pioneers want to move on from today’s world of “weak” or “narrow” AI, to create “strong” or “full” AI, or what is often called artificial general intelligence (AGI). In some respects, today’s powerful computing machines already make our brains look weak. AGI could, its advocates say, work for us around the clock, and drawing on all available data, could suggest solutions to many problems. DM, a company focused on the development of AGI, has an ambition to “solve intelligence”. “If we’re successful,” their mission statement reads, “we believe this will be one of the most important and widely beneficial scientific advances ever made.”
Since the early days of AI, imagination has outpaced what is possible or even probable. In 1965, an imaginative mathematician called Irving Good predicted the eventual creation of an “ultra-intelligent machine … that can far surpass all the intellectual (智力的) activities of any man, however clever.” Good went on to suggest that “the first ultra-intelligent machine” could be “the last invention that man need ever make.”
Fears about the appearance of bad, powerful, man-made intelligent machines have been reinforced (强化) by many works of fiction—Mary Shelley’s Frankenstein and the Terminator film series, for example. But if AI does eventually prove to be our downfall, it is unlikely to be at the hands of human-shaped forms like these, with recognisably human motivations such as aggression (敌对行为). Instead, I agree with Oxford University philosopher Nick Bostrom, who believes that the heaviest risks from AGI do not come from a decision to turn against mankind but rather from a dogged pursuit of set objectives at the expense of everything else.
The promise and danger of true AGI are great. But all of today’s excited discussion about these possibilities presupposes the fact that we will be able to build these systems. And, having spoken to many of the world’s foremost AI researchers, I believe there is good reason to doubt that we will see AGI any time soon, if ever.