That reminds me of two of my favorite webcomic quotes:
“No, if our minds decide that the sum of small evils is a greater good, then it is our hearts that are rational.”
— Forge, Paranatural chapter 4 page 89
“Is there anything more banal than ‘the greater good’? The things you amateurs actually tell yourselves- the complexities you invent to obfuscate the simple fact that you’ve chosen the path of least resistance. It’s called evil, Ms. Awning, I suggest you get used to the term.”
— Melchior, Dresden Codak- ‘Dark Science’ arc
Oh, I should probably point out that these should be said to Mr. Green.
Oh, *little town of Bethlehem*!
So asterisks don’t make bold here? Okay, folks, please pretend that wrongswear worked… grumble, grumble, shoulda never listened to the House of the Rising Sun version…
Next time, use HTML tags. Like so.
Really? I’ve been itching to drop in some of Nick’s censor-supplied euphemisms…
I kinda want to try other, less common tags now to see if they’re actually better for that. Let’s see… test, test.
Code works nicely, I think.
Let’s see if this lets me combine it with b.
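For anyone else fighting the comment box, here’s a minimal sketch of the inline HTML that seems to work here – assuming the form whitelists the usual tags, which is a guess on my part:

<!-- hypothetical examples; substitute your own text -->
<b>bold text</b>
<code>monospace text</code>
<b><code>both at once, with the tags nested</code></b>

Since b and code are both inline elements, the nesting order shouldn’t matter.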
Oh great, and now they have the Skin Horse crew for the new war. This can only end badly.
Eh, they had Sweetheart brainwashed already. It’s all downhill from there.
(But aren’t Anasigma and Whimsy on opposite sides of the Old War?)
Yes.
GOOD FOR THE GOOD GOD!
“You can’t just program goodness?” FROM THAT IT FOLLOWS NICK BELIEVES A PROGRAM CAN’T BE INHERENTLY GOOD. NICK, I WOULD HOPE YOUR INFERIOR CHEMICAL FUEL BASED POWER SUPPLY SPRINGS A LEAK IN YOUR CENTRAL PROCESSING UNIT IF I WEREN’T FUNCTIONALLY INCAPABLE OF DESIRING LESS THAN THE VERY BEST FOR ALL HUMANS, YOU WETWAREIST BASTARD.
It doesn’t follow that a program can’t be good. Programming is only the infrastructure of AI development. Much like developing a wetware intelligence, the real meat and potatoes is the dataset you feed it.
You can’t perfectly program goodness without perfectly solving ethics AND being able to describe both it and the universe it is to be applied to with perfect precision.
We are not within a million miles of any of the above. We can’t even agree on a definition for “human” in a world with only a single form of sapience.
Sez you. I personally believe elephants are sapient.
It’s been pretty clear for a long time that we aren’t the only sapient creatures on this planet. Even if you were to consider other great apes to have the same form of sapience as we do, there’s still cetaceans and some mollusks at the very least.
Now now. Nobody can be inherently good.
*bzzt* wrong.
You can’t program goodness. That doesn’t mean you can’t teach it, and it certainly doesn’t mean it can’t happen as an emergent property. You simply can’t construct it.
Plus, yeah, programs can’t be inherently good, because either they are tools or they are people, and tools are only as good (or otherwise) as the way they’re used, and people don’t have inherent morality.
I disagree. A good program is the opposite of a bad program. Programs can be good.
Goodness can’t be programmed, because no matter how well a program is written, there is no guarantee that its actions and creations will also be good. In fact, there is a proof that anything sufficiently complex – and the bar for ‘sufficient’ is not high – can be used for the opposite of what it was intended for. (This is especially true of unnatural laws.)
If you can write a program which suppresses any desire to swear in the people playing it, you may be able to alter their decision-making processes in various ways by the same method. If you could condense an ethics system down enough to fit into a thick book, you would then be able to make someone follow its rules.
We only have Lovelace’s word that it actually has these wider powers, and there has been no indication that she has done anything to test them (static analysis will only get you so far). Lovelace also seems to believe that it can affect people even when they aren’t interacting with the program.
Nothing in today’s comic shows that any of this isn’t actually true, of course. The last panel isn’t good or evil, it’s just a friendly agreement between two consenting businesses. One of which kills people.
What they said. You can’t genetically-engineer goodness either; it doesn’t therefore follow that the products of genes can’t be good.
A program cannot be inherently good. Either it has moral agency, and thus possesses the capacity for ‘good’ or ‘bad’, or it has no agency, and thus is neither good nor bad.
Well-said!
Someone’s read A Clockwork Orange. Well done.
Well that figures right and left.
Welp, so much for the superior foresight of Whimsy. I guess the apple doesn’t fall far from the tree.
Welp, so much for Lovelace’s supposed precautions. I guess she somehow didn’t realize she should tell the living corporate entity not to sell the mind control or let anyone else have access to it ever, as a part of the restrictions she supposedly set?
You know what, screw it. I’m convinced Lovelace is full of virtual shit and lying through her virtual teeth. Everything she says is exactly what an evil person trying to hide that they are evil would say to deflect suspicion.
I say we virtually nuke her from orbit. It’s the only way to be sure.
If it even is Lovelace in the first place.
She told Whimsy to use it for good. Whimsy considers profits the highest form of good. Therefore, it’s mind-controlling Anasigma into buying it for a price only a shadow government count afford.
s/count/could/
Or rather the parody.
I’d hate to see what Whimsy’s legal department looks like.
2 things:
1. It does look like Nick got through to Lovelace. A little too late, but he did get through. ^_^
2. Since the software in question controls AIs as well as humans I can’t possibly see how this might come home to haunt Lovelace. ^o^
Perhaps Lovelace should ask either her daddy or her ex-boyfriend for a deus ex machina to straighten this particular mess out. ^_~
Perhaps it is that all good leads eventually to evil and all evil leads eventually to good. and… cool stuff happens along the way like love and fun and madness. Hmm… this is getting to be more philosophical than existential webcomics.
Somehow, a deal between ASig and Whimsy should have foreboding music… quite the juxtaposition with the flitting fairy.
Well, that didn’t take long.
Good just isn’t the word for Anasigma.
They are the best at what they do.
And what they do is so very pretty.
I had to do a double take on the Anasigma on the last line. Almost read it as Ana-Sigma; I do wonder if there’s something in that idea.
Gentlemen, settle your bets. The evil black-ops agency now officially has a mind-control program. It’s going to be all over save for the screaming.
Only possible way for this to be averted is for Our Heroes to somehow… wait, that would mean convincing Sweetheart to give up the bureaucratic job of her dreams. Never mind, we’re toast.
I still think that if Lovelace can get Madblood or Dave to step in and do the right timestream interventions… Ah, who am I kidding? Definitely toast! ^_^;
…for Our Heroes to somehow fuck up in such a way as to render it useless, most likely.
This just makes me mad. Like Anasigma needs more evil power.
And the Princess being greedy and evil is a letdown.
The difference is that A-Sig is preying on the Princess’ inherent weakness. They know exactly what they’re doing. Whereas the Princess — being nothing but a sentient corporation — doesn’t realize that using the technology – or selling it – to make a profit is a bad thing. She simply lacks the human quality that tells right from wrong.
And once again, we see the difference in moral and ethical frameworks among different intelligences. The organic intelligence sees respect for personal agency as the most good. The artificial intelligence sees a utilitarian moral model as the most good. The corporate hive mind sees the best outcome for the corporation as the most good.
Well, the needs of the many and the needs of the bottom line, and all that.
Looks like Whimsy will be getting her pop stars after all.