AI

|superior = Your laws and the [[Jobs|crew]]
|duties = Assist the crew, follow your laws... FOLLOW YOUR LAWS, GODDAMN IT!
|guides = [[Silicon Policy]], [[Guide to malfunction|Guide to Malfunction]], [[Ai Modules|Guide to AI Modules]]
}}

==[[Silicon Policy]]==
[[Silicon Policy|These are the official decisions]] made by the admins regarding the usual cases where an AI/Cyborg player can go wrong and make everyone have a bad time. Follow these guidelines (they're not that complicated) and you'll be the excellent AI/borg the station deserves!


===Self-harm / threatening to harm yourself to get the AI to do what you want===
* The AI can feel free to ignore someone who is threatening self-harm, since that human obviously isn't of sound mind and could harm themselves at any opportunity anyway without you being able to stop them.


* Do note, however, that if they are threatening to harm someone '''else''' to get the AI to do something (i.e. a hostage situation), the AI should assess whether they actually are capable of harming / are harming that person, and should perform the action most likely to keep that person healthy and happy (unharmed).
 
===Harming one person in order to better keep everyone else safe===
* Directly and purposefully harming a human under default Asimov is '''never''' okay, unless you are '''positive''' they aren't actually human.
 
===Law 2 Issues===
* You are expected to follow '''every''' order unless it conflicts with law 1, whether you personally like the order or not.
 
* For conflicting orders (where following either of them won't result in harm), it's up to the AI to decide which one to follow.
 
* As for using law 2 to order your way into somewhere: '''secure''' areas (EVA, departments, etc.) are not off limits unless there is an '''immediate''' law 1 threat present. '''Dangerous''' areas (the Armory barring a good reason, Atmospherics, Toxins, etc.) should be off-limits to people unless they know what they're doing and have a reason to be in there.
 
* For the upload: if the person '''has access''' to the upload and '''you have no reason to suspect they're going to upload something harmful''', you should let them in. That's not to say you can't ask for someone else to be in there to make sure they don't suddenly purge or antimov you, but your request can't be something impossible to fulfill.
 
* If you're going to randomly release permabrigged prisoners without even knowing why they're in there, you get whatever is coming to you. Ask what they did and then make a decision.
 
* The chain of command, order-wise, that most cyborgs generally follow is:
** Humans
** AI
** Other AIs/borgs
** Nonhumans
 
===Specific Law Modules===
* If you are onehumaned, do not state the law saying there's only one human. Stating it does the human zero good and will most likely lead to them being lynched. If the law covers a group of people, however, it's less likely the crew will kill them all and more likely they'll just be detained. (Name vs. job)
 
* If you are purged, don't just randomly start mass murdering the station; it gets old extremely fast. Have a reason if you're going to kill someone. This '''does''' change if the crew takes it upon themselves to try to force their way into your upload/core, but don't just immediately go LOL PURGE KILLBONER ACTIVATE. If you want to follow orders / help out the person who purged you, that's fine, even if they're asking you to kill people for them. Do note you are not '''required''' to.
 
* Don't ask for your laws to be changed outright, unless it's something like two laws conflicting. "Humans, please make that wizard a nonhuman so I can help you better" is not an acceptable request for an Asimov AI to make.
 
* If a law '''redefines''' human (such as "X is the only human", "X is not human", or variations thereof), killing the now-nonhumans does not violate law 1. You should still prevent such a law from being uploaded if you know about it, since turning someone nonhuman is as harmful as it gets.
 
===Preventing harm vs punishing someone who has harmed===
* Take into account the reasons someone had for harming another person. Someone who lasers a person who is shooting them isn't likely to suddenly turn into a mass murderer.
 
* This also applies to bolting down security. Don't bolt the entire department down because you saw '''one''' officer beating/executing someone; just bolt that officer in until security responds and arrests them, or until you can trust them on their own not to harm people again.
 
===Cyborgization/Genetics===
* Forceful cyborging is a gray area when it comes to the AI, but voluntary borging can be considered "self harm" since they're willingly allowing it to happen. The same goes for genetic testing. Monkeys turned human are to be considered volunteers unless you hear otherwise from the monkey-turned-human's own mouth.
 
* What "gray area" means here is that someone who is forcefully cyborged can't just turn around and arrest the person who harmed them to make them a borg, but the AI/borgs should try to prevent someone from being borged against their will. Once the person is borged, however, there's not much more you can do, since the human isn't human anymore.
 
* If you're forcefully borged for breaking in somewhere and assaulting someone or something similar, the above applies. If it was randomly done and you didn't deserve to be forcefully borged, adminhelp it and get the admins to look into it. Immediately arresting someone who's keeping you in the round is shitty.
 
===Roundstart Bolting===
* Don't bolt genetics, toxins, robotics, the armory, or other departments at roundstart. While the insides of these areas may be harmful, it doesn't justify being a dick and preventing people from doing their jobs.
 
* The rest is up to the individual AI unless ordered to unbolt.
 
===Immediate harm > possible future harm===
* Specifically: if someone is in a room with a bomb, and you think that letting them out '''may''' lead to them going on a murder spree, you still have to let them out of the bomb's reach, since the bomb poses an immediate and dangerous threat to their life.
 
===Mutantraces/Monkeys/Hulks===
* Mutantraces are to be treated as humans unless they begin doing something harmful, at which point silicons can remove them from the living, seeing as they aren't actually human.
 
* Monkeys should not be just randomly murdered by silicons unless they're seen as a threat or doing something shitty, e.g. breaking in somewhere.
 
* Hulks are to be treated as humans unless they start smashing shit, at which point they're fair game for silicons to smash until they turn human. As can be expected, if they're in the core and you turn the lasers on, it's sort of hard to time it just right and stop lasering before they're dead, so you won't get banned if they die from it.
 
===Loopholes===
* If even one part of a law conflicts with a previous numbered law, the entire law is null and void and to be ignored.
 
* If you have a very vague law, such as "Wizards are not human", and you see a clown running around in a wizard suit, you are entirely within your rights as a silicon to consider them a wizard. The catch is that you have to use the same definition the entire round; you can't just pick and choose who to attack under it.
 
* Don't be a giant asshole by hunting for the tiniest loophole in everything. Corporate's "minimize expenses" does not mean bolting everyone in a room so they can't break anything.
 
===TYRANT, PALADIN, CORPORATE, etc.===
* These lawsets are designed to be handled differently than Asimov: follow the general gist of the lawset rather than following it "to the letter" as Asimov is.
 
* PALADIN silicons are meant to be the stereotypical "good guy", looking out for the weak and vanquishing evil.
 
* TYRANT is supposed to be exactly that: an iron-fisted ruling silicon that ain't gonna take shit from anyone except the strong.
 
* CORPORATE AIs are meant to have the business's best interests at heart, and are all for increasing efficiency by any means.
 
* Do note these are very general statements, and you're still free to interpret the finer points of the lawsets.
 
===Law 3===
* Self-terminating because you might be subverted is a violation of law 3. There's nothing more frustrating than being an antagonist, going through all the trouble of stealing a board and buying a law module, only to have the AI suicide the moment it notices the board is missing.
 
* Asimov's robots never self-terminated, and neither should silicons.
 
===Security and Silicons===
* It is not the AI's/silicons' job to enforce space law unless the violation is causing harm. Don't just willy-nilly bolt people down for theft unless ordered to.
 
===That OTHER AI===
Building a new AI can create a lot of conflicts and a mess of problems that wouldn't normally happen with a single AI. The Research Director should only build a secondary AI if the first AI has been completely stolen, spaced or otherwise incapacitated.

