As intelligent autonomous agents and multiagent system applications become more pervasive, it becomes increasingly important to understand the risks associated with using these systems. Incorrect or inappropriate agent behavior can have harmful effects, including financial cost, loss of data, and injury to humans or systems. For example, NASA has proposed missions where multiagent systems, working in space or on other planets, will need to do their own reasoning about safety issues that concern not only themselves but also their mission. Likewise, industry is interested in agent systems that can search for new supply opportunities and engage in (semi-)automated negotiations over new supply contracts. These systems should be able to negotiate such arrangements securely and decide which credentials may be requested and which may be disclosed. Such systems may encounter environments that are only partially understood, in which they must learn for themselves which aspects are safe and which are dangerous. Thus, security and safety are two central issues when developing and deploying such systems. We refer to a multiagent system's security as the ability of the system to deal with threats that are intentionally caused by other intelligent agents and/or systems, and to its safety as its ability to deal with any other threats to its goals.
|Title||:||Safety and Security in Multiagent Systems: Research Results from 2004-2006|
|Author||:||Mike Barley, Haris Mouratidis, Amy Unruh, Diana F. Gordon-Spears, Paul Scerri, Fabio Massacci|
|Publisher||:||Springer - 2009-09-30|