The Infinite Space Between Words

The conversion factor from miles to kilometers is less than 2, and a factor of 2 is often within the error bars on astronomical distances anyway. When you see X miles, just think “between X and 2X km”, since we’re all familiar with multiplying by 2. :slight_smile: Actually, 1.5 is a decent approximation, so for a closer estimate go halfway between X and 2X km.
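
For what it’s worth, the exact factor is about 1.609 km per mile, so the halfway estimate comes in roughly 7% low. A tiny Python sanity check, purely for illustration:

    # rough "X miles -> 1.5 * X km" estimate vs. the exact factor (~1.609 km per mile)
    KM_PER_MILE = 1.609344
    miles = 1000                        # any example distance
    print(miles * KM_PER_MILE)          # 1609.344 km (exact)
    print(miles * 1.5)                  # 1500.0 km (estimate, about 7% low)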

2 Likes

I think both statements are true. The computer is much faster and much more accurate but cannot “make the most trivial common sense deductions” on its own.

This discussion reminds me of a quote from John Cook’s blog…

http://www.johndcook.com/blog/2013/01/24/teaching-an-imbecile-to-play-bridge/

This is going to come off as super-science-fiction’y, but I really can’t help but think about the Matrix when I read this post.

To heck with “we’re too slow for computers”, I wanna get to the future where we can plug into the computer and live thousands of lifetimes in a single minute. Just imagine what that could mean for dying patients to be able to live ‘forever’, for scientific research, and for whole fields to just be able to spend seconds of real world life to gain decades of experience.

Heck, we’ve already got Spock’s tablet, and that seemed rather futuristic back then. I love imagining a future where we have all sorts of things we once thought could solve the world’s problems… just so we can take them for granted.

Think of it as a stimulus package. It can’t be more expensive than the huge sums we throw at the banking sector. This may actually end up in the hands of normal people.

1 Like

I agree that my numbers are a broad benchmark for “how much better is disk performance on SSD vs HDD overall” rather than for latency specifically. The test you cited is kind of an extreme load test, though, and won’t reflect the average latency of a typical request. For reference, the average rotational latency per spindle speed (see the sketch after the list):

  • 4,200 rpm – 7.14 ms
  • 5,400 rpm – 5.56 ms
  • 7,200 rpm – 4.17 ms
  • 10,000 rpm – 3 ms
  • 15,000 rpm – 2 ms
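
Those are the half-revolution times, i.e. 30,000 / rpm milliseconds; a minimal Python check (my own sketch, not from the original post) reproduces them:

    # average rotational latency = time for half a revolution = 30_000 / rpm ms
    for rpm in (4200, 5400, 7200, 10_000, 15_000):
        print(f"{rpm} rpm -> {30_000 / rpm:.2f} ms")   # 7.14, 5.56, 4.17, 3.00, 2.00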

One thing I love about SSDs is that they are attacking the worst-case performance scenario, when information has to come off the slowest device in the computer, the hard drive. So SSDs are reducing the variability of requests for data massively:

Without SSD:
0.9 ns → 10 ms (variability of 11,111,111× )

With SSD:
0.9 ns → 150 μs (variability of 166,667× )

So that’s an improvement in overall performance variability of 66×!
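
The arithmetic checks out; here’s a tiny Python sketch (mine, not from the post) using the same three numbers:

    # access-time spread = slowest access / fastest access, using the numbers above
    fastest_s = 0.9e-9                    # 0.9 ns (fastest access in the comparison)
    hdd_s     = 10e-3                     # 10 ms worst-case spinning-disk access
    ssd_s     = 150e-6                    # 150 microsecond SSD access
    print(f"{hdd_s / fastest_s:,.0f}x")   # 11,111,111x spread without SSD
    print(f"{ssd_s / fastest_s:,.0f}x")   # 166,667x spread with SSD
    print(f"{hdd_s / ssd_s:.1f}x")        # 66.7x reduction in the worst case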

I think you missed a crucial point in your story: we humans live in continuous time, while computers ‘live’ in discrete time.
For us, the distance between words is possibly infinite; we could split that second into a million parts and the cells in our body would still be doing something in that millionth of a second, something ‘decisional’ pushing a thought or an action forward.
For computers everything is finite, from one instruction to the next. Between instructions there is only emptiness, and some electrical current going from one NAND gate to another.

And brains just have electrical current going from one neuron to another - not really a fundamental change there. (Brains are essentially what we call an analogue computer; we can build those using op-amps, but building one on the scale of even a common standard CPU would be a daunting task.)

What I find difficult to follow is the idea that a computer will somehow “think” faster as an AI than a human does or can, just because it can do basic math faster. Early AIs will almost certainly think much slower than humans; as that improves, eventually they will think as fast as, if not faster than, humans. But so MUCH faster? There is no reason why they or we would ever build a computer that can think an order of magnitude faster than a human, simply because the speed of thinking isn’t of major benefit to an entity beyond “fast enough to hold an intelligent conversation with other entities”. Why double the speed at which a computer can think when you could instead use that additional computing power to give it better access to the pool of knowledge, faster (but “dumb”) computation, space to run hardcoded routines for the day-to-day stuff, and so forth? To be fair, that last part is what the brain does too: when you walk, you don’t consciously decide how to move each leg and how to balance on your foot while your center of gravity shifts; that’s taken care of by learned routines, “reflexes” in human terms, which operate without you needing to waste the fairly limited supply of thought on them.

I can certainly see how, in order to do research in pure disciplines, you could run an AI at a high multiple of human thought speeds, and have it communicate in a high-speed network with other minds working on similar problems; however, for day-to-day usage, there would be little point in paying the computing and power costs to process a million times faster than a human, if the world you live in is plodding along at human speeds.

The time for a reboot (5 minutes?) seems a bit off by today’s standards. A dual-core i2 booting Win 8.1U1 from a WD Black, fully kitted out with service-laden apps such as Visual Studio, Office 365, Creative Cloud and multiple SQL Server instances, takes about 30 s to the login screen and about 1 minute to the desktop (and the most significant delay seems to be establishing the wireless Internet connection). I don’t think I have ever seen a PC take 5 minutes outside of trying to run a modern OS on the bottom tier of obsolete hardware.

These are great! Thanks for sharing them.

I think you misread the first number from Norvig’s article, though. He says:

execute typical instruction: 1/1,000,000,000 sec = 1 nanosec

which isn’t the same as saying that 1 CPU cycle = 1 ns.
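
Right: Norvig’s 1 ns is a per-instruction rule of thumb, not a cycle time. A modern multi-GHz, superscalar CPU has a cycle well under a nanosecond and can retire more than one instruction per cycle; it’s cache misses and stalls that pull the typical figure back up toward 1 ns. A back-of-the-envelope sketch (the 3 GHz clock and IPC of 2 are made-up example values):

    # cycle time vs. "typical instruction" time (example figures, not measurements)
    clock_hz = 3.0e9               # assume a 3 GHz CPU
    cycle_ns = 1e9 / clock_hz      # ~0.33 ns per cycle
    ipc      = 2.0                 # assume ~2 instructions retired per cycle
    instr_ns = cycle_ns / ipc      # ~0.17 ns per instruction when nothing stalls
    print(cycle_ns, instr_ns)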

Before I finally got my SSD, a full reboot took ~20 minutes. And yeah, it might not be workstation hardware, but it’s a nice desktop.

The old spinning hunk of metal hard drive is definitely an evil that most who have moved to SSD don’t even remember the pain of enduring.

The last time we tried to switch to metric, people actually shot at the metric signs.

The above Internet times are kind of optimistic.

I think the times given are nominal one-way times; you have to double them to get round-trip times.

California to US east coast is currently about 80 ms round-trip (so nominally 40 ms each way).

California to UK is currently about 160 ms round-trip (so nominally 80 ms each way). (See data below.)

In fact, the times given must be nominal one-way times, because claiming a 40 ms round-trip time between SF and NYC would be faster than the speed of light in fibre.

An interesting observation emerging from this fact is that parts of the Internet have already got within less than a factor of two of the theoretical optimum predicted by Einstein. Not many human endeavours can make that claim. We may continue to lower Internet latency, but we’ll never see a 10x improvement on the SF-NYC round-trip time.
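
For anyone who wants to check that: light in glass fibre travels at roughly 2/3 of c (refractive index around 1.47), and the SF–NYC great-circle distance is about 4,100 km, so the best conceivable round trip over fibre is on the order of 40 ms. A rough Python sketch with those assumed figures (the distance and refractive index are my approximations, not numbers from the post):

    # lower bound on the SF-NYC round trip over fibre (approximate figures)
    distance_km   = 4_100                       # rough SF-NYC great-circle distance
    c_vacuum_km_s = 299_792                     # speed of light in vacuum, km/s
    c_fibre_km_s  = c_vacuum_km_s / 1.47        # glass slows light to roughly 2/3 c
    rtt_ms = 2 * distance_km / c_fibre_km_s * 1000
    print(f"{rtt_ms:.0f} ms")                   # ~40 ms, so a measured 80 ms is within ~2x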

Here’s something I wrote on this topic about 18 years ago: http://stuartcheshire.org/rants/Latency.html

Stuart Cheshire

— lcs.mit.edu ping statistics —

% ping -c 10 lcs.mit.edu
PING lcs.mit.edu (128.30.2.121): 56 data bytes
64 bytes from 128.30.2.121: icmp_seq=0 ttl=45 time=74.269 ms
64 bytes from 128.30.2.121: icmp_seq=1 ttl=45 time=74.198 ms
64 bytes from 128.30.2.121: icmp_seq=2 ttl=45 time=75.381 ms
64 bytes from 128.30.2.121: icmp_seq=3 ttl=45 time=74.460 ms
64 bytes from 128.30.2.121: icmp_seq=4 ttl=45 time=74.152 ms
64 bytes from 128.30.2.121: icmp_seq=5 ttl=45 time=74.157 ms
64 bytes from 128.30.2.121: icmp_seq=6 ttl=45 time=74.213 ms
64 bytes from 128.30.2.121: icmp_seq=7 ttl=45 time=74.100 ms
64 bytes from 128.30.2.121: icmp_seq=8 ttl=45 time=74.103 ms
64 bytes from 128.30.2.121: icmp_seq=9 ttl=45 time=74.092 ms

10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 74.092/74.312/75.381/0.371 ms

— cam.ac.uk ping statistics —

% ping -c 10 cam.ac.uk
PING cam.ac.uk (131.111.150.25): 56 data bytes
64 bytes from 131.111.150.25: icmp_seq=0 ttl=45 time=153.824 ms
64 bytes from 131.111.150.25: icmp_seq=1 ttl=45 time=153.980 ms
64 bytes from 131.111.150.25: icmp_seq=2 ttl=45 time=160.978 ms
64 bytes from 131.111.150.25: icmp_seq=3 ttl=45 time=154.426 ms
64 bytes from 131.111.150.25: icmp_seq=4 ttl=45 time=154.310 ms
64 bytes from 131.111.150.25: icmp_seq=5 ttl=45 time=153.809 ms
64 bytes from 131.111.150.25: icmp_seq=6 ttl=45 time=153.864 ms
64 bytes from 131.111.150.25: icmp_seq=7 ttl=45 time=154.885 ms
64 bytes from 131.111.150.25: icmp_seq=8 ttl=45 time=155.006 ms
64 bytes from 131.111.150.25: icmp_seq=9 ttl=45 time=154.196 ms

10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 153.809/154.928/160.978/2.056 ms
1 Like

Thanks for the spoiler! Would it have hurt to put a warning, or to end the Her quote a couple of sentences earlier?

Haha, I’d like to see more on that metric sign shooting! :stuck_out_tongue:

I do not live in the US but I did for a while. It is hard to live with two measurement standards (and it’s not just metric, btw), especially with a messed-up one, when you communicate internationally.

I just hope that, like rioki there, people who do reach out try as hard as possible to remember to use the proper standard. Maybe over time this scenario can change in the US. :smile:

Could you please elaborate more on this? O_o

Please see the detailed document IntractableStudiesInstitute.org/ProjectAndros.pdf and more in the free ebook Penzar.pdf. I just passed the halfway point of a 5-year project to be the first to encode my mind into an android robot, way ahead of the curve and not even on their radar! I am using my own independently developed M1 Architecture.

–> Patrick Rael, Director, Intractable Studies Institute
P.S. When all else fails, come to the Institute!

From the post:

To computers, we humans work on a completely different time scale, practically geologic time.

Now imagine if Earth is actually alive and intelligent. How long will it take us to come up with solid evidence of that? And/or how long would it actually take computers to understand that humans are intelligent, and thus be able to communicate with us?

The best definition of “intelligence” I’ve seen so far is “capacity to predict the future”. If the Earth can do that but never act on it, it will take a long time before we can realize it. And, I speculate, so will the computers… A long time, at least, in their time scale! Yay for us! :smiley:

That is, if they don’t kill themselves by changing their environment (us) first.

You do realize all that sounds insane, right? I always hope such insane ideas will indeed lead us (humanity) into something… Heck, I even gave some bucks to solar freaking roadways, and that isn’t nearly as much of a stretch as this, but it’s still crazy enough that it will probably never become a reality!

The PDF you provided is too much craziness for me. I don’t know what an MIQ or an M1 Architecture is, but I do know this project has many signs pointing to failure, IMHO.

That strikes me as rather funny because gun owners seem to have no trouble telling the difference between .40 caliber and 9 mm, or 7.62 vs .308. It would be especially funny if they were shooting at the metric signs using metric ammunition.

2 Likes

I think it does not really matter whether Earth is intelligent or not, because it lacks the ability to take any (inter)actions. It can be cut out of our reasoning by Occam’s razor.

Computers, on the other hand, simply cannot recognise our intelligence because they are not able to “think”; they are only able to perform preprogrammed tasks. And if that ever changes, I’m pretty sure their coders will design the thinking/intelligent software with built-in knowledge of human intelligence.

And… I define intelligence as polymorphic behavior that can handle the abstractions of future and past.

1 Like