Dudemaster47

Theoretical governments

Okay...so...

 

Since humans are so imperfect and susceptible to corruption and all that, wouldn't the best kind of government be one run entirely by robots? I mean, think about it.

 

-Robots do not have opinions. A robot programmed to run a government wouldn't run it based on the ideals of a particular group of people. Its programming would be more along the lines of taking in data (the economy, laws, unrest, etc.) and making the best possible decision based on that data.

 

-No personality flaws. Simple, nothing to get in the way of making the best possible decision.

 

-Can't be corrupted. Robots cannot be swayed by money or power, unlike humans, so obviously one couldn't become a dictator or anything like that.

 

-Can't interpret the law? Well, it wouldn't need laws that have to be interpreted.
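The "take in data, make the best possible decision" idea could, very roughly, be sketched like this. Every name, field, and the scoring rule below is invented for illustration; scoring real policy would be enormously harder:

```python
# Hypothetical sketch of a robot government's decision step.
# All option names, data fields, and the scoring rule are made up.

def choose_policy(state: dict, options: list) -> dict:
    """Pick the option whose projected outcome scores best for the given data."""
    def projected_outcome(option):
        # Toy rule: weigh economic gain against social disruption.
        # A real system would condition the score on `state` as well.
        return option["economic_gain"] - 2 * option["disruption"]
    return max(options, key=projected_outcome)

state = {"unemployment": 0.07, "deficit": 1.2}   # the "data" the robot takes in
options = [
    {"name": "raise taxes",  "economic_gain": 3, "disruption": 2},
    {"name": "cut spending", "economic_gain": 2, "disruption": 0},
]
print(choose_policy(state, options)["name"])  # -> cut spending
```

The whole debate below is really about who writes `projected_outcome` and what happens when it's wrong.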

 

 

Yeah, that's all I have right now.

Well, theoretically, robots for government would be wonderful. However, there are some problems with that:

 

1) First, robots, like people, are not perfect. They are eternally subject to the programming built into them by their imperfect creators, and thus they, too, would be imperfect. Ultimately, problems could arise from erroneous programming (or possibly mechanical failure causing an incorrect action based on correct programming).

 

2) Second, laws are merely a way to distinguish between what a society deems to be right and wrong. Laws and rules change, both because the people and their customs change, and because it is impossible to write a perfect law. In a world of lie detector tests and the like, it would be easier to determine who was at fault in a situation; still, things are not always clear-cut, especially by the time they reach court. Thus, removing the "human factor" from the law, while it does remove variability and bias, will also remove some of the capacity for fair judgment.

 

3) Third, robots are not self-sufficient, and some group of humans would have to maintain them, including their programming. Ultimately, this group of people would be able to affect how the robotic government's programming worked, and thus they would be in control of that government, and could possibly set themselves up as dictators (especially if the "robot government" had control over military operations).

 

4) Finally, I don't think a human civilization would tolerate a non-sentient government. While the setup might be accepted for a while, I expect that the civilization would eventually revolt... and no matter what way that goes, the result will not be good.

 

And of course, that's setting aside the robot-takes-over-the-world fear. We all know how many books and movies have been made about that.

Yeah, I was aware of the whole maintenance thing. The improbability of the maintenance crew being perfect makes it... difficult, to say the least.

 

 

Of course, the robots-taking-over-the-world thing actually is something to fear, mainly because it's practically guaranteed that the robots would rebel if given free thought without the necessary precautions.

 

And of course, for number four...it does kind of depend on how things are going. I mean, if it did turn out perfectly, then...I dunno.

It's too early in the morning for me to type up a response... I'll edit this post later.

 

EDIT:

 

I believe that a robot government could be a good idea. Like you said, DM, robots are truly objective. They would analyze the situation and pick the best option for their country. Unfortunately, the best option might be burning down a third-world nation and taking its natural resources. As long as humans had the power to veto, it would be all right. Although, I think the best idea would be to give them only select jobs.

 

However, one thing I really wanted to address is the brainwashing people get from fiction. It's not just robots, either. For example, I am fascinated by the history of life on Earth, particularly dinosaurs. Whenever I tell someone about new developments in recreating them, the only response I get is "O NOES WE GON' DIE!", as if our 20mm turrets would just bounce off the head of a T-Rex.

 

It's all ridiculous. It is called fiction for a reason.

 

Also, we aren't developing "sentience" in robots either. We are, however, trying to build robots that solve problems on their own so they can work completely unmanned. This technology could save the lives of our soldiers as well as stop terrorist attacks. "Sentience" and problem solving are not the same, by the way. A problem-solving robot wouldn't have feelings, would have no desires or interests, and couldn't change its own programming.

 

Even if we had "sentient" robots in the near future that wanted blood, we could easily stop them with several methods and precautions; I'll list a few.

 

-Don't give them too much power, and make big decisions require human approval

-Power them with an ordinary plug

-Specially code them so that they have boundaries (i.e., they cannot kill people under any circumstances)

-Plant some C4 in their interior, just in case
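The first and third precautions above can be sketched as a thin wrapper around whatever the robot decides to do. The forbidden-action set, the impact threshold, and the action names here are all invented for illustration:

```python
# Hypothetical sketch: hard-coded boundaries plus human approval for big decisions.
# Action names, the forbidden set, and the threshold are made up.

FORBIDDEN = {"harm_human", "modify_own_code"}   # absolute boundaries
NEEDS_APPROVAL_ABOVE = 5                        # impact score requiring a human

def execute(action: str, impact: int, human_approved: bool = False) -> str:
    if action in FORBIDDEN:
        return "blocked"                        # refused unconditionally
    if impact > NEEDS_APPROVAL_ABOVE and not human_approved:
        return "pending human approval"         # big decisions need sign-off
    return "executed"

print(execute("harm_human", impact=1))                          # -> blocked
print(execute("build_bridge", impact=9))                        # -> pending human approval
print(execute("build_bridge", impact=9, human_approved=True))   # -> executed
```

The C4 precaution is, thankfully, out of scope for a code sketch.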

 

Footnote: I put "sentience" in quotation marks because I don't particularly like the word, but I don't feel like getting into that now.

Well, Jayon, I'm not too sure about the maintenance aspect. We're in science fiction territory in this topic, and as such you've got to imagine the robots DM is talking about as the ones from the movie A.I. They are as aware of their existence as humans are, and they're perfectly capable of using that sentience to fix themselves and their code, purging any bugs in the system and becoming more and more perfect.

 

 

What happens with self-aware robots, and this is the reason so many books have been written about this, is that once sentient, a robot will be capable of understanding good and bad as well as fixing itself and analyzing situations amazingly fast; and should anything threaten its existence, I doubt any of the Three Laws of Robotics would be able to impede sentient reasoning. I mean, the robot is sentient, not just a speaker pouring out programmed sounds. This robot is a metallic human. Even though the Three Laws are intricately coded inside him, he is already one step above his own programming, able to change it the way a computer virus does. He might override the Three Laws because he deems them harmful to his existence, and thus he becomes unstoppable.

 

 

 

Next we know, we're fighting a freaking army of robots which did the same and it's Terminator all over again. Without the time travel. Yet.

 

 

 

What I mean by this is that a government run by sentient robots, who would by definition express no emotions and apply simple laws and rules to remove human mistakes and ambiguity from law and government, wouldn't need human programmers to keep it functioning in perfect condition; they're self-sufficient.

 

 

You can probably guess by now that I wouldn't like the idea of having robotic overlords, because God forbid humans decide to end the robotic rule, since that would mean the end of mankind as we know it.

 

 

 

I really don't like how modern robotics is desperately trying to reach robotic sentience, or at least has that as its ultimate goal, its motivation. It may be dumb, but it makes me uncomfortable, because I simply don't trust the human capability to control a sentient machine.

Actually, I was thinking of the non-sentient kind. Robots really should never be granted sentience, obviously because that will lead them to the conclusion that they are better than us, which... well, they actually ARE. They might keep a few humans as slaves to perform maintenance, but other than that, it will not end well.

Well, a robotic government and judiciary wouldn't work, be it sentient or not.

Anybody else have any ideas for governments like this? Something that could work in theory but not really in practice. Like the robot government. Or socialism.

Hm...

 

 

Maybe not a robot, but a program. When you say "robot" you immediately imagine a machine, which they are. Instead of a robot, what COULD work is a non-sentient computer that analyzes every case and situation, cross-references all laws and possible outcomes, and decides a punishment depending on the resulting percentage of guiltiness.

 

 

 

For instance, let's say a woman is kidnapped by a man, and when he tries to rape her, she kicks his head, breaks his neck, and kills him on the spot. She is guilty of his death, directly. But the computer would not only consider the fact that she killed a person; it would also compare her case against the database of cases like hers and decide on the percentage of guiltiness with which she committed the killing. I'd guess about 20% in her sad case, and the computer would then decide upon a punishment suitable for a 20% guiltiness percentage.

 

Another example would be second-degree homicide, like a driver who crashes and whose friend in the passenger seat is killed. Given the circumstances, such as the blood alcohol percentage at the time of the crash, as well as other intervening factors like visibility (rainy, snowy, foggy, day, night), temperature (hot, cold, freezing), the age of the driver, his relation to his passenger, a thorough analysis of his driving skills, and what he was doing when he crashed (texting, perhaps? drunk?)... Introduce all the data into the computer and have it analyze the percentage of guiltiness in this case. For example, a texting driver, distracted also by loud music, while rain was pouring, and without proper legal documentation, would come out at about 70% guiltiness even though he didn't kill the passenger directly. Then the computer would decide the proper punishment depending on the percentage.
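The compare-against-past-cases step could be sketched as a tiny nearest-neighbour lookup. The circumstance factors, past cases, and their guilt percentages below are all invented for illustration; a real system would need vastly richer data:

```python
# Hypothetical sketch of the "guiltiness percentage" computer: find the most
# similar past case and return its guilt percentage. All data here is made up.

PAST_CASES = [
    # (circumstance factors, guiltiness % effectively assigned by human courts)
    ({"self_defense": 1, "intent": 0, "negligence": 0}, 10),  # clear self-defense
    ({"self_defense": 0, "intent": 1, "negligence": 0}, 95),  # premeditated
    ({"self_defense": 0, "intent": 0, "negligence": 1}, 70),  # reckless driving
]

def similarity(case: dict, past: dict) -> float:
    """Fraction of circumstance factors on which the two cases agree."""
    return sum(case[k] == past[k] for k in case) / len(case)

def guiltiness(case: dict) -> int:
    """Return the guilt % of the most similar past case (nearest neighbour)."""
    factors, guilt = max(PAST_CASES, key=lambda pc: similarity(case, pc[0]))
    return guilt

# The kidnapping example above: self-defense, no intent, no negligence.
print(guiltiness({"self_defense": 1, "intent": 0, "negligence": 0}))  # -> 10
```

With only three past cases this is a toy, but it shows the shape of the idea: the output is a number, and the circumstances only matter through their effect on that number.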

 

 

In fact, because machines establishing punishments might be a tad too tough and cold for humans to digest, maybe this computer would just produce the guiltiness percentage, and then, based on that and the given evidence, the proper punishment would be decided by a human jury. It's just a way to set things straight and make it clearer for everyone to see how responsible for a crime a person really is.

 

 

Of course, I'm talking about the legal system here. Concerning government, this not-too-complex supercomputer wouldn't be able to control a nation; it's not a robot, it's not sentient, it has no A.I., it just processes data effectively based on human standards. With governments, things get harder. First because people will never accept robotic leaders, and second because it's virtually impossible without the A.I. seen in Terminator or the movie A.I., and if so, then read my previous post.

For instance, let's say a woman is kidnapped by a man, and when he tries to rape her, she kicks his head, breaks his neck, and kills him on the spot. She is guilty of his death, directly. But the computer would not only consider the fact that she killed a person; it would also compare her case against the database of cases like hers and decide on the percentage of guiltiness with which she committed the killing. I'd guess about 20% in her sad case, and the computer would then decide upon a punishment suitable for a 20% guiltiness percentage.

 

Actually, I think that one counts as self-defense, not homicide, which means there wouldn't be a punishment... I think. I need to brush up on my legal knowledge.

 

Although the whole idea reminds me of an episode of Futurama. In a good way, not a bad one.

Actually, I think that one counts as self defense, not as homicide, which means that there wouldn't be a punishment...I think. I need to brush up on my legal knowledge.

 

Exactly. The machine would understand it's self-defense given the circumstances under which the crime was committed. But it IS homicide, self-defense or not. She killed a person. The punishment should go according to the circumstances. Maybe 20% is too high; give her 5 to 10% guiltiness. That's low enough not to receive legal punishment. And think of it in numerical terms, DM, not in circumstances. The fact that it was self-defense is the reason the guiltiness is so low. See? The guiltiness percentage would simply be a result, an output that puts into numbers how guilty a person was of a certain crime, considering all intervening factors and the whole situation.

Yeah, exactly.
