Uh, Nick means an 8-bit integer. An unsigned 256-bit integer can hold values up to 2^256-1. (~1.157920*10^77.) Overflowing that would take a lot of stegosaurus abuse.
Was gonna say pretty much the same thing. Unsigned at that also, otherwise it’d go from 127 to -0 since that’s what happens when unsigned bitwise math acts on a signed int, it tries to clock up from 127 to 128 and 11111110 becomes 00000001 which is 128 unsigned, but zero with a negative flag signed since a signed int uses the high bit as the sign flag. Of course assuming a small-endian architecture. Otherwise everything would be mirrored.
Uh… that explanation seems to be a bit confused.
127 is represented as 01111111 in binary. Add 1 and you get 10000000. Same ordering as in decimal. If the sign bit (leftmost) is set, you subtract 256 to find the value represented, so 10000000 represents -128.
The reason for doing it this way is that you can use a single adder for both signed and unsigned arithmetic, and don’t have to special-case the sign bit.
Endianness is about how small fields are put together to make larger ones, and isn’t really relevant here.
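To see it concretely, here’s a minimal C sketch (the printout is just for illustration): the add produces the single bit pattern 10000000, and only the interpretation differs.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t bits = 127;          /* 01111111 */
    bits = (uint8_t)(bits + 1);  /* 10000000: same adder, no special case */

    printf("as unsigned: %u\n", bits);                               /* 128 */
    /* Two's complement rule from above: high bit set => subtract 256. */
    printf("as signed:   %d\n", (bits & 0x80) ? bits - 256 : bits);  /* -128 */
    return 0;
}
```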
Meh, whatever, something else I got back to front, story of my life. Though the binary was written that way on purpose; it takes too damn long for me to evaluate the other way round. If I’d ever had to use it in the 21 years since I learned how binary, endianness (btw, it most definitely applies to bits at the serial level), and its mathematics work, I might have actually kept it straight. But even hacking 68K assembler in my teens was all hex, which is a whole nother can of worms I avoid when I can, because thinking in base-16 gives me an even bigger headache than having to read LTR to sum binary written the way everyone else calls proper. And the only thing I ever even used binary for was calculating subnet masks.
You only go from 127 to -0 if the computer is using a signed-magnitude representation. Pretty much no modern computers ever do this for integers (though it’s a different story for floating-point). Almost all computers use two’s complement for signed integers, in which case you wrap from 127 to -128.
And yes, considering that most modern computers are 64-bit at best, it’s very strange to hear Nick talking about a “simple” 256-bit integer (and Lovelace not knowing better herself, even though she’s talking about an 8-bit unsigned integer).
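If it helps, a quick sketch contrasting the two schemes mentioned above (illustrative only, not any particular machine’s hardware): decode the pattern 10000000 both ways.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t bits = 0x80;  /* 10000000, i.e. the result of 127 + 1 */

    /* Signed-magnitude: high bit is a sign flag, low 7 bits the magnitude. */
    int sm = (bits & 0x80) ? -(int)(bits & 0x7F) : (int)bits;

    /* Two's complement: high bit set means subtract 256. */
    int tc = (bits & 0x80) ? (int)bits - 256 : (int)bits;

    printf("signed-magnitude: %d\n", sm);  /* -0, which prints as plain 0 */
    printf("two's complement: %d\n", tc);  /* -128 */
    return 0;
}
```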
“considering that most modern computers are 64-bit at best”
That’s in the real world (or the approximation to it that we inhabit). In the Skin-Horse universe, who knows what’s standard or simple? If we really wanted to be pedantic about it, maybe the jargon’s evolved differently, and “256 bit” means “A binary digit representation of 256 ordinal values”.
In fact, what really surprises me is that they’re using binary storage in computers rather than full analogue storage 🙂
Digital is a lot faster, cheaper, and less error-prone, and quickly exceeds measuring precision of analogue parts.
Analog has only an advantage in purely mechanical computers (such as Mustachio), which are invariably slower than electronic, and even then only for some operations. (Granted, there are some operations that electronic analogue circuits can do simpler than digital, but they are still less precise.)
I would expect the madfolk to use optical quantum resonance field processors where applicable, and Whimsy is old tech.
What John said. Eight bits.
Shave and a haircut, 256 bits.
That’s five coin!
Looks like a job for Hitty.
That means… aggression was over nine thousand
I mean nine duovigintillion
Wow, it’s almost as though I don’t know what the hell I’m talking about. Obviously that can’t be the case, right?
…will fix.
Eh, on my 0-255 scale of ridiculous things that I’ve seen non-computer-professionals write about computers in various forms of fiction, confusing the bit length of a variable with its maximum storable value is maybe a 3.
At one point in Dan Brown’s Digital Fortress, I literally yelled, “COMPUTERS DO NOT WORK LIKE THAT!” at the book.
Ah, the Gandhi code from Civ….
I can totally identify with what Lovelace says in panel 3 to an uncomfortable extent.
Rather creeped me out how much I laughed at that.
I don’t know what would worry me more. That aggro and affection are encoded in the same field, or that that field is an unsigned char.
What worries me the most, of course, is that it’s done without overflow checks.
Was this VR written in FORTRAN?
I guess they just put it on a sliding scale, where 0 is absolute devotion and 255 is absolute murderous rage, and then 127-128 is Lukewarm Indifference.
But affection and aggression should be orthogonal dimensions!
Otherwise you get omnicidal mania on one end, and monomaniacal obsession on the other, with an instability in the middle!
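A minimal sketch of the orthogonal version (the struct and field names are made up for illustration, obviously not the comic’s actual code):

```c
#include <stdint.h>

/* Two independent axes, so maxing out devotion can't wrap around
   into murderous rage the way a single shared byte can. */
struct Mood {
    uint8_t affection;   /* 0 = indifferent, 255 = absolute devotion */
    uint8_t aggression;  /* 0 = placid,      255 = murderous rage    */
};
```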
It’s a byproduct of mad science. Basic error checking and handling is against the rules.
Also, a basic rule of software engineering: “never test for an error condition you don’t know how to handle”.
Well, it isn’t that difficult to code a saturating counter, so I’m not entirely sure how much that rule applies 😛
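Right; a saturating bump is only a few lines of C (a sketch, assuming the mood lives in an unsigned byte as discussed above):

```c
#include <stdint.h>

/* Clamp at the byte's limits instead of wrapping around them. */
static uint8_t sat_add_u8(uint8_t v, uint8_t delta) {
    return (v > UINT8_MAX - delta) ? UINT8_MAX : (uint8_t)(v + delta);
}

static uint8_t sat_sub_u8(uint8_t v, uint8_t delta) {
    return (v < delta) ? 0 : (uint8_t)(v - delta);
}
```

With these, petting the stegosaurus past 255 just leaves it at 255.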
It could be worse, it could be like the Lorewalker Cho/Divine Spirit thing from Hearthstone: https://www.youtube.com/watch?v=SOD7Ni_3NIc
To clarify, I mean the consequences of the bug could be worse.
Careful, he’s still on the cusp! One hug could send Boney into a killing rage again!
Well, that explains Nick’s being less sweary.
I’ve actually encountered this in games before. In one particularly memorable BBS space trading game, your good/evil alignment was a single signed 2-byte word, and it would flip if your notoriety or infamy climbed past 16,000.
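Same failure mode, wider field. A sketch of that flip with a signed 16-bit word (the exact threshold above is from memory; the standard limit is 32,767):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int16_t alignment = INT16_MAX;  /* 32767: maximally good */
    /* Step through unsigned to keep the demo free of signed-overflow UB. */
    alignment = (int16_t)((uint16_t)alignment + 1);
    printf("%d\n", alignment);      /* -32768: paragon becomes villain */
    return 0;
}
```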
Why “Boney?”
Boney plates sticking out his back?
She’s a big fan of Mother 3?
I would have gone with Spike.
I thought it was a reference to Weinerville TV show. Now leave me alon-ey!
History buff? Bonaparte would be proud.
“Why Gandhi Is Such An A**hole In Civilization” from Kotaku
http://kotaku.com/why-gandhi-is-such-an-asshole-in-civilization-1653818245
I think they are trying to reference this. Aggression in Civ I was a scale from 0 (peaceful) to 11 (warmonger), but it was stored as 0 to 255 (a single byte, 256 values). You could just as well read those same bits as -128 to 127; any 256-value range theoretically works (e.g. 1000 to 1255).
Now this led to a comical problem: Gandhi used 1 for his aggression, but when democracy was researched, a civ’s aggression dropped by 2; Gandhi went to -1.
The scale is 0 to 255, so there is no -1; the value rolled over to 255, roughly 25× more warmongering than Khan! He instantly declared war on EVERYONE.
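The arithmetic is trivial to reproduce (a sketch of the reported mechanism, not the actual Civ source):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t aggression = 1;  /* Gandhi's starting value */
    aggression -= 2;         /* democracy modifier, no lower-bound check */
    printf("aggression = %u\n", aggression);  /* 255: maximum warmonger */
    return 0;
}
```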
I do remember having to wipe India off the face of the planet a couple of times.
My favorite Civ game I actually got to build the Giant Robots and run amok all over Japan. Good times.
Actually, Baron’s last comment is quite insightful.
There could be a secret message encoded in that dinosaur via steganography.
Lovelace is smart. She’ll figure that out in a triceratops.
I dino. She may not be able to raptor mind around the idea.
Whoa, whoa. Check yourselves before you Rex yourselves here, friends.
Er, I’m afraid you’ve messed up. It’s 8-bit integers that have a range from 0 to 255, for a total of 2^8 = 256 values.
A 256-bit integer would be enormous. Most computers these days use 64-bit numbers as their basic units, and those can take 2^64 different values, or between 0 and 18 billion billion. A 256-bit int would be for storing numbers like “How many atoms are in the nearest million galaxies”
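Just to put the limits side by side (plain C, printing the standard <stdint.h> maxima; there’s no standard 256-bit type to print):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    printf("8-bit unsigned max:  %u\n", (unsigned)UINT8_MAX);       /* 255 */
    printf("64-bit unsigned max: %" PRIu64 "\n", (uint64_t)UINT64_MAX);
    /* 18446744073709551615, the "18 billion billion" above; 2^256 - 1
       (~1.16e77) would need a bignum library. */
    return 0;
}
```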
My Boney lies over the ocean,
My Boney lies over the sea,
My Boney lies over the ocean,
Oh bring back my Boney to me!
(Come on, you were all thinking it.)
/raises hand/
Uh, I wasn’t.
This reminds me of India in the first Civilization game. Their default aggression level was close to zero, but there was no way for the game to represent a value lower than zero–meaning that, if circumstances in the game caused their aggression to drop below zero, the game would just circle the aggression level all the way up to ten, which is basically “drop a nuke on anyone who disagrees with you about anything.” So people playing the first Civilization got to witness the bizarre sight of Mahatma Gandhi threatening them with nuclear weapons–and then following through on said threats!
Just like in the real world!
The AI for Gandhi was hand-coded to recreate the bug in several later games too, despite proper bounds checking in the games after that point.