On 24 January 1926, the British ship SS Antinoe was damaged by a hurricane and was at risk of sinking. The American ship SS President Roosevelt assisted in the rescue and later docked in Plymouth, England. A photograph of the rescue was published in both London and New York the next day. The speed with which this photo reached New York apparently caused such a sensation that Science and Invention magazine published the linked infographic on how it was accomplished; briefly: the image was sent as a digital bitmap over a transatlantic telegraph cable.
There's a synchronous and instantaneous nature you don't find in modern designs.
The image is not stored at any point. The receiver and the transmitter are, in a certain sense, part of the same electric circuit. It's a virtual circuit, but the entire thing - transmitter and receiving unit alike - is oscillating in unison, driven by a single clock.
The image is never entirely realized as a complete thing, either. While slow-phosphor tubes do display a static image, most CRT systems used extremely fast phosphors; they release the majority of their light within a millisecond of the beam hitting them. If you take a really fast exposure of a CRT display (say 1/100,000th of a second), you don't see the whole image in the photograph - only the few most recently drawn lines glow. The image as a whole never exists at the same time. It exists only in the persistence of vision.
Just wanted to add one thing, not as a correction but just because I learned it recently and find it fascinating. PAL televisions (the color TV standard in Europe) actually do store one full horizontal scanline at a time, before any of it is drawn on the screen. This is due to a clever encoding in this format: the TV needs to average two successive scan lines (phase-shifted relative to each other) to draw them. Supposedly this cancels out some forms of distortion. It is quite fascinating that this was even possible with analogue technology. The line is stored in a delay line for 64 microseconds.
See e.g.: https://www.youtube.com/watch?v=bsk4WWtRx6M
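To make the trick concrete, here's a toy numerical sketch (my own illustration, not a broadcast-accurate decoder): chroma is modeled as a phasor U + jV, PAL flips the sign of V on alternate lines, and a differential phase error rotates every line's phasor by the same amount. Averaging the current line with the conjugate of the delayed line cancels the rotation.

    # Toy model of PAL's delay-line averaging (a sketch, not broadcast-accurate).
    # Chroma is a phasor U + jV; PAL inverts V on alternate lines, and a
    # differential phase error `theta` rotates every line's phasor.
    import cmath, math

    def pal_average(u, v, theta):
        err = cmath.exp(1j * theta)
        line_a = complex(u, v) * err    # normal line, with phase error
        line_b = complex(u, -v) * err   # next line, V inverted, same error
        # The receiver holds line_a in the 64 us delay line, undoes the V
        # inversion on line_b (a complex conjugate), and averages the two:
        return (line_a + line_b.conjugate()) / 2

    u, v, theta = 0.3, 0.4, math.radians(10)
    c = pal_average(u, v, theta)
    print(c)                            # ~ (0.295+0.394j): hue preserved
    print(abs(c) / abs(complex(u, v)))  # saturation scaled by cos(10 deg) ~ 0.985

The phase error ends up as a small, uniform loss of saturation (a factor of cos theta) instead of a visible hue shift, which is why the delay line was worth its cost.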
At some point, most NTSC TVs had delay lines, too. A comb filter was commonly used for separating the chroma from the luma, taking advantage of the chroma phase being flipped each line. Sophisticated comb filters would have multiple delay lines and logic to adaptively decide which to use. Some even delayed a whole field or frame, so you could say that in this case one or more frames were stored in the TV.
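Here's a minimal sketch of the 1-line comb idea (my own toy example, not a real NTSC decoder): because the color subcarrier inverts phase on successive lines while the luma (assumed vertically correlated here) does not, the sum and difference of two adjacent lines separate the two.

    # Toy 1-line comb filter: separate luma from chroma using two adjacent
    # scanlines, relying on the chroma phase flipping between them.
    import math

    def comb_separate(line_a, line_b):
        luma   = [(a + b) / 2 for a, b in zip(line_a, line_b)]
        chroma = [(a - b) / 2 for a, b in zip(line_a, line_b)]
        return luma, chroma

    n = 8
    ramp = [i / n for i in range(n)]                            # luma ramp
    sub  = [0.2 * math.cos(math.pi * i / 2) for i in range(n)]  # subcarrier
    line1 = [y + c for y, c in zip(ramp, sub)]
    line2 = [y - c for y, c in zip(ramp, sub)]                  # phase flipped
    y, c = comb_separate(line1, line2)
    print(y)  # recovers the ramp
    print(c)  # recovers the subcarrier

The adaptive filters mentioned above add logic on top of this: when adjacent lines aren't actually correlated (say, at a vertical color edge), they switch to a different delay or fall back to a simple notch filter to avoid artifacts.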
You can decode a PAL signal without any memory; the memory is only needed to correct for phase errors. In SECAM, though, it's a hard requirement, because the two color components, Db and Dr, are transmitted on alternating lines and you need both on each line.
It doesn’t begin at the transmitter either: in the earliest days, even the camera was essentially part of the same circuit. Yes, the concept of filming a show and broadcasting the film over the air existed eventually, but before that (and even after, for live programming) the camera would scan the subject (actors, etc.) line by line and send the signal down a wire to the transmitter, which would send it straight to your TV and into the electron beam.
In fact, to show a feed of only text/logos/etc. in the earlier days, they would literally just point the camera at a physical object (like letters on paper) and broadcast from the camera directly. There wasn’t really any other way to do it.
Our station had an art department that used a hot press to create text boards, which were set on an easel with a camera pointed at it. By using a black background with white text, you could merge the text camera with a camera in the studio and "superimpose" the text onto the video feed.
"And if you tell the kids that today, they won't believe it!"
The very first computers (the Manchester Baby) used CRTs as memory: the ones and zeros were bright spots on a “mesh”, and the electric charge on the mesh was read and resent back to the CRT to keep the RAM fresh (a sort of self-refreshing RAM).
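As a toy illustration of that refresh loop (my own sketch, nothing like the Baby's actual circuitry): treat each bit as a decaying charge, and periodically read and rewrite it before it fades below the detection threshold.

    # Toy model of the Williams-tube idea: stored charge leaks away, so the
    # machine continuously reads each spot and rewrites it at full strength.
    import random

    store  = [random.choice([0, 1]) for _ in range(32)]  # bits as "spots"
    charge = [1.0 if b else 0.0 for b in store]

    def decay(charge, rate=0.1):
        return [max(c - rate, 0.0) for c in charge]

    def refresh(charge, threshold=0.5):
        # Read each spot: anything still above threshold counts as a 1,
        # and gets rewritten to full charge before it can fade further.
        return [1.0 if c > threshold else 0.0 for c in charge]

    for _ in range(100):       # refresh cycles interleaved with decay
        charge = decay(charge)
        charge = refresh(charge)

    assert [1 if c > 0.5 else 0 for c in charge] == store  # data survives

If the refresh loop ran too slowly relative to the decay, the pattern would be lost, which is exactly the constraint these machines lived under.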
Yes, but those were not the standard kind of CRTs that are used in TV sets and monitors.
The CRTs with memory used in early computers were actually derived from the special CRTs used in video cameras, where the image formed by the projected light was converted into a distribution of charge stored on an electrode, which was then sensed by scanning with an electron beam.
Using CRTs as memory was proposed by von Neumann, and in his proposal he used the appropriate name for that kind of CRT: "iconoscope".
DRAM-like memories made with special storage CRTs were used for a few years, until about 1954. For instance, the first generation of commercial electronic computers made by IBM (the scientific IBM 701 and the business-oriented IBM 702) used such CRTs.
Then CRT memories became obsolete almost instantly, due to the development of magnetic-core memories, which did not require periodic refreshing and which were significantly faster. The fact that they were also non-volatile was convenient at that early time, though not essential.
Today, due to security concerns, you would actually not want your main memory to be non-volatile, unless you also always encrypt it completely, which creates problems of secret-key management.
So CRT memories became obsolete several years before vacuum tubes in computers were replaced with transistors, which happened around 1959-1960.
Besides CRT memories and delay-line memories, another kind of early computer memory that quickly became obsolete was magnetic-drum memory.
In the cheapest early computers (like the IBM 650), the main memory was not a RAM (i.e. neither a CRT nor magnetic cores) but a magnetic drum, i.e. with sequential, periodic access to data.
Yeah, it's super weird that while we struggle with latency in the digital world, storing anything for any amount of time is an almost impossible challenge in the analog world.
ambitions of encyclopedic scope, something like "hoping to cover all the domains of knowledge"; in the context of a photographer, to photograph everything worth photographing
I get where you're coming from, but "encyclopedic" has a pretty specific definition, and it doesn't twin with "ambition" in any sensible way. It seems like it should, but it doesn't. I'm commenting on the poor writing. Maybe "grand ambitions", or "colossal ambitions"? "encyclopedic ambitions" just doesn't work, unless we're going with vibe-writing :-)
4 MB of SRAM in the '80s would have been the main RAM of a supercomputer.
We still use SRAM today. It's what level-1 cache and registers are implemented with - actual flip-flops that can be toggled with a one-cycle delay. Supercomputers used to make their entire main memory out of SRAM; effectively the whole thing was L1 cache.
The 486 has an on-chip cache - 8 or 16 KB of SRAM. Very large for the time.
Off-chip access to the DRAM involves wait states. The read or write is stalled until the DRAM enters the appropriate state. The 486 would also do block reads of 16 bytes at a time to fill an entire cache line. This is around the time main memory and the CPU became increasingly decoupled.
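A back-of-the-envelope sketch of why the burst fill helps (the cycle counts here are made up for illustration, not real 486 timings):

    # Hypothetical cycle counts, just to show the shape of the tradeoff.
    RAS_WAIT = 5   # wait states to set up a DRAM access
    XFER     = 1   # cycles per 4-byte transfer once set up

    def isolated_reads(n_words):
        return n_words * (RAS_WAIT + XFER)   # pay the setup every time

    def burst_line_fill(n_words):
        return RAS_WAIT + n_words * XFER     # pay once, stream the rest

    print(isolated_reads(4))    # 24 cycles for four separate reads
    print(burst_line_fill(4))   # 9 cycles for one 16-byte cache-line fill

Amortizing the wait states over a whole cache line is the same bet modern memory controllers still make, just at much larger ratios.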
Avoiding all the complexity of managing DRAM is why hobbyists use SRAM these days. Basically: to avoid cost. Ironic!
I've done similar loose experiments using a 19th-century French novel I have read (L'Agonie by Jean Lombard), which is quite obscure. No English translation of the novel has been published.
Using Gemini Notebooks, I've found it passable at summarizing chapters, listing characters, major themes, and so on, and it can do this in English.
"What is the symbolism of the Black Stone?"
> In the sources, the Black Stone (referred to as the Pierre-Noire) serves as the central religious icon of the Syrian Sun cult brought to Rome by the Emperor Elagabalus. Its symbolism is multifaceted, ranging from fertility and virility to a mystical pursuit of universal unity.
> It represents the perennity of the Sun, which fertilizes the world, causing "germs to rise" and spreading them through the atmosphere. It is viewed as the "definitive form" of divinity, intended to absorb and replace the "transitory forms" of all other Roman, Greek, Egyptian, and Persian gods, including the Christian "Kreistos".
> Virility and Phallic Symbolism
>
> The sources explicitly characterize the Black Stone as a gigantic sacred phallus. It is described as:
> • An icon of virility and the "organ of generation"
> • A "unisexual icon" that materializes the generative force of nature
> • A representation of "virility in activity," which is why it is often paraded and elevated in an "orgasmic" or "colossal adoration"
> The Androgyne and Universal Unity
>
> Beyond simple fertility, the philosopher Atillius explains a more complex, "mad" metaphysical project associated with the stone. It symbolizes "Life One" (Vie Une) and the return to a unisexual state of perfection.
> • The Androgyne: Atillius believes that by pursuing "the male sex by the male sex," the cult "inutilizes" the female sex to eventually create the Androgyne, a self-sufficient being containing both sexes.
> • Unity: The stone signifies the fusion of all generative forces into a single Unity, reversing the "separation of the sexes," which is viewed as a state of unhappiness and impotence.
> • Marriage of Moon and Sun: The ritual marriage of the goddess Astaroth (representing the Moon and the female principle) to the Black Stone (representing the Sun and the male principle) symbolizes the merging of the Orient and Occident into this unified life principle.
>
> Destruction of the Symbol - The Black Stone ultimately becomes a symbol of Oriental pollution and decadence to the Roman populace. During the final rebellion against Elagabalus, the stone is torn from its temple on the Palatine, defiled with filth, and broken into pieces to ensure that its "signification of Life" would never again dominate Rome.
This is all accurate to the book, even teasing out a couple themes that were only subconsciously present to me.
The NotebookLM version gives citations with links to the original text to support all these assertions, and the citations largely do support them.
The input is raw images of a book scan! Imperfect as it is, it still blows my mind. Not that long ago, any kind of semantic search or analysis like this was a very hard AI problem.
Not quite the same analysis. The human is better, no surprise. But the NotebookLM output links back to the original book in a very useful way. If you think about it as fuzzy semantic search it's amazing. If you want an essay or even just creativity, yes it's lacking.
It doesn't have to be the same analysis to put it in a partially overlapping vector space. Not saying it wasn't a useful perspective shuffling in the vector space, but it definitely wasn't original.
LLMs haven't solved any of the 2029 predictions as they were posited, but I expect some will be reached by 2029. The AI hype acts like all this is easy. And "not by 2029" doesn't mean impossible, or even that we won't get most of the way there.
LLMs will never achieve anything as long as any victory can be hand-waved away with "it was in the training set". Somehow these models have condensed the entire internet down to a few TB, yet people aren't compressing their terabytes of personal data down to a couple MB using this same tech... wonder why.
It wasn't a hand wave. I gave an exact source, which OP admitted was better.
They certainly haven't "condensed the entire internet into a few TBs". People aren't backing up their personal data to a few MB because your assumption is false.
Maybe when people stop hand-waving about abilities that aren't there, we will better understand these models' use as a tool and not as magic.
Sensory disabilities like deafness and blindness are disabling because the world is not oriented to people with sensory disabilities.
I am reminded that the Deaf have their own mythology. American Sign Language is distinct; it's not English. Accordingly it has its own culture, including its own myths. Many of them are fables and stories from the western tradition slightly adapted. But some are original.
One common theme in American Deaf mythology (but I'd bet it's told elsewhere too) is stories about a world which is visually oriented. There's an ASL word for this world but English doesn't have one. Sometimes it's translated as Eyeth a.k.a. "Eye-Earth".
It's more than just a world where everyone is deaf or where everyone communicates in ASL. It has something like spiritual meaning to some of those who tell stories about it; in that world the Deaf are not disabled, not in the social way that matters.
Reminds me of The Country of the Blind by HG Wells.
It’s about a guy who finds his way into a valley in a mountain range where everyone has been blind for generations. At first he thinks that he’ll have "a superpower" because he’s sighted. Instead the people of the valley view his sight as an illness.
I'm learning ASL. That led me to learn about Deaf culture in North America, and the stories that the Deaf have told each other and passed down. A world where everyone is deaf is one of the first stories you'll learn about; I'm not even sure when I first encountered it, but it was in that context.
One common modern version of the fable is told with an astronaut who finds that they've landed on a parallel Earth where everyone is Deaf and sign language is the norm.
The book A Study of American Deaf Folklore by Susan D. Rutherford is a bit dated now, but interesting in how it explores the functions and roles of these myths.
No, deafness and blindness are disabling because sight and hearing provide critical long-range data. Being able to see is essentially a superpower if everyone around you is blind. Same with hearing.
Maybe, but that isn't really what the GP post is talking about. At the level of mythology, the Eye-Earth is a place where people of that group belong without judgment or limitation. No different from Harry Potter or Narnia or any other fantasy place one might imagine going to be with their people.
In any case, I'm not sure this even survives transposing to other senses that humans are weak in, such as smell (like prey animals) or magnetic direction (like migratory birds). A human who randomly had these would indeed be seen as superpowered, but that wouldn't become a statement that all regularly-abled humans are now disabled for missing the "critical" long range sense.
I wonder whether all the animals of Eyeth are also deaf, and how they are doing?
Deaf predators must have a field day sneaking up on deaf prey.
As life evolved on Earth, so did the senses that life forms possess, and that happened for a reason. If you are missing some senses, there is a sense in which you are set back millions of years of evolution.
It's not just about human society, but biology.
Someone with no sensory disabilities, sent into the wilderness, has better chances of survival than someone with such disabilities, other factors being equal. That has nothing to do with society, which is absent from that scene. Civilization is the best place for people with disabilities, even if it is geared toward those without. For that matter, it's better for animals with disabilities. People help disabled pets lead quality lives; wild animals with disabilities don't live long.
That's all factually correct. Though both things can be true: disabilities can be disabling in themselves, and additionally the disabled can be disabled by the society around them. Someone fully blind might not be able to distinguish a poisonous mushroom from an edible one with the same shape and smell but a different color; that is a fundamental limitation of the inability to see. But blind people can, for example, still read. They are often just not provided with writings that are accessible to them, although that would be possible and is not a fundamental limitation of their condition.
Also ableism and othering are very much a thing that disables peoples' ability to function in a society and come exclusively from the social environment rather than from the disabled themselves.
I wouldn’t read too much into the logic of mythological worlds and realms.
Their purpose is narrative, not scientific. They don’t even need to be internally consistent.
No one expects Greek mythology to make scientific sense. Other mythologies should be seen from a similar perspective and understood as narrative, not logic.
Applying a scientific viewpoint to such mythologies results in a new narrative. The scientific view is always wrong unless scientific correctness is part of that world’s narrative.
I add this because a lot of people aren't familiar with narrative purpose.
To put it briefly:
Other peoples' worlds aren't wrong when they don't match "what makes sense in the real world".
Meh, my formidable powers of foresight aren't really a superpower. Few people listen until things have progressed far enough that they can see them too, by which point there are rarely many interventions left. And every time we do intervene early, it's "you said this would happen and it didn't happen!", making it harder to convince people the next time. And when things do turn out more or less as predicted, I "made a lucky guess" because "there was no way you could have known that".
In the land of the blind, why would anyone pay attention to this weirdo's ramblings about "rain-clouds"? Obviously they're just feeling changes to temperature, pressure, and humidity. Oh, and they know what shapes things are? Wow! So does everyone else who's touched the things. Sure, that "how many fingers am I holding up?" party trick is pretty neat (probably cold reading), but not something we should make policy decisions on the basis of.
Vision is absolutely a superpower if everyone else is blind. Just think how far you can shoot something with a rifle and scope. Guns are useless to blind people. A person who can see has an enormous advantage over a blind person in a fight. Try to imagine a military where everyone is blind fighting against another where everyone can see.
And, again, is one person going to develop those? A person with access to elastic rope might invent the slingshot, but I wouldn't expect them to invent the far superior sling: it's not obvious that the sling is better, since the learning curve is steeper. And a slingshot is not a particularly effective weapon: it's an inefficient bow that can't fire arrows.
You're still thinking in terms of "sighted society versus blind society", which is not what we are discussing. (Unless you're thinking "sighted and superintelligent", in which case I'd say sight is probably redundant.)
Ok. Just evading blind people would be absurdly easy if you can see. You could accurately throw rocks and run away from them all day. And being attacked from a distance would be terrifying to blind people.
Blind people are no less capable of throwing stones, and you only have the flight advantage if the ground is potentially treacherous (e.g. unmanaged forest, scrubland) or you're that much faster. Any inhabited area will have been engineered to be safe for people to navigate – and it will not be well lit at night, when your reliance on vision will put you at a skill disadvantage.
The main advantage in an urban combat environment, I think, would be the ability to detect quiet people at a distance. Not needing to see makes it easier to hide yourself from visual inspection, but why would anyone develop this skill if nobody can see? Then, if the only person to practice with is the enemy you're trying to hide from… Also, you'd be able to dodge projectiles by watching the person throwing them, who might not telegraph their throws audibly, but would probably do so visually. This would let you defeat a single ranged opponent, possibly two – though I doubt your ability to dodge the rocks from three people at once for long enough to take one down.
But what do you gain from winning fights against small numbers of people? (I doubt very much you could win against a group of 30 or 40 opponents, with only sight as your advantage.) You would run out of food, shelter would be hard to come by, and every theft of resources would risk defeat: and one defeat against a society means it's over. Either you're killed, imprisoned, or they decide to do something else with you, presumably depending how much of a menace you've been. Your only options are to attempt a self-sufficient lifestyle (which you probably won't survive for long), to flee somewhere they haven't heard of your deeds, or to put yourself at the mercy of the justice system (and hope it isn't too retributive).
"Blind people are no less capable of throwing stones"
They sure suck at aiming.
But the best way to exploit the ability to see when everyone else is blind is to provide a service blind people can't. You could be a much better doctor, diagnosing diseases based on sight and performing surgery much better.
Only in that narrow viewpoint. Most people talk about disability in the context of a society because much of what we encounter in our day to day is created by other people. The sights, sounds, smells, and experiences in our world are frequently because of others. So in that context, if the dominant culture makes it a point to create experiences that require hearing or sight to consume, then yes it's a disability. But if we adapt some or all of what we do for people who don't have those senses, then we can make it less disabling.
While it's good for society to accommodate those with disabilities as much as possible, we shouldn't pretend it isn't detrimental to be unable to see or hear. You don't need to believe obvious falsehoods in order to accommodate people.
I’ve always found this semantic argument somewhat silly as being blind or deaf is an obvious disadvantage in natural contexts, but one of the more compelling ideas here is that the fitness boundary isn’t fixed. It would probably be a fitness advantage if I could sense electromagnetic fields, but no one would describe me as disabled for not being able to sense these fields—unless, perhaps, everyone else could.
So what we consider to be a disability does seem to be a function of what we consider to be normal.
The point is that the capability is measurable but the capabilities we consider to be essential are based on normalcy and thus effectively arbitrary. Eugenicists make the argument that evolution demonstrates that the classification is not arbitrary because deafness and blindness confer measurable fitness disadvantages, but they don’t actually bridge the gap of deriving an ought from an is.
> Obviously?
If the answers to these problems are obvious to you, perhaps you’d consider writing a book instead of participating in a discussion forum. I would encourage you to review the site guidelines.
If it were a fitness advantage to sense electromagnetic fields, then why have you evolved over billions of years to get where you are without it?
But wait, you do sense electromagnetic fields in the 380 to 750 nm wavelength range, and remarkably well, to great profit.
The only fitness advantage that matters for evolution is whatever gets you to pass down your genes, versus someone else not passing down theirs. If sensing low-frequency electromagnetism, or static magnetic fields, were advantageous in the context of everything else that you are, for passing down your genes, you would have it by now.
Migratory birds can sense the Earth's magnetic field for navigation; if you needed to migrate thousands of kilometers every year (due to lacking other advantages to make that unnecessary), you might evolve that.
Evolution is highly path dependent and stochastic, so I’m not sure your logic follows.
Eg, the laryngeal nerve in giraffes is ridiculous — but having gone down that path before their current form, there's little way to fix it. They're now stuck in a local optimum of long necks (good) with poor wiring (bad).
Vision has evolved numerous times, with estimates suggesting eyes or light-sensitive spots have appeared independently at least 40 to 65 times, possibly even 100 times, across different animal lineages.
Hearing has evolved numerous times independently: at least six times in major vertebrate groups (mammals, lizards, frogs, birds, crocodiles, turtles) for airborne sound, and at least 19-20 times in insects.
Vision and hearing have evolved so many times because they give an absolutely huge survival advantage.
To my knowledge, photosensitivity has arisen a few times independently, and eyes a few more times from shared photosensitive receptors within Animalia, but I'm fairly sure hearing in the vertebrate groups you mention is a tetrapod synapomorphy.
Yes, it is path dependent; my example alludes to that. Birds benefit from being able to sense the magnetic field for navigation precisely because they evolved the ability to fly, and the endurance to do so over long distances. In that context, not losing your bearings is a fitness advantage.
> if I could sense electromagnetic fields, but no one would describe me as disabled for not being able to sense these fields—unless, perhaps, everyone else could.
Light is an EM field. A possible scenario is a battle at night where others have night-vision equipment and you don't. You could absolutely be described as disabled, or as being at a significant disadvantage.
Because, as you say, what we consider normal in that scenario is to have proper night-vision equipment.
You've set up a straw man here - nobody in this thread is claiming that it's not detrimental to be missing a sense.
The point is that disability exists within the context of the world we live in, and the society we've built is one that largely assumes people have both sight and hearing.
Ah, I see the disconnect. In this discussion, "disabling" is not the same as "detrimental." Disabling is when you are unable to do important activities that others can do. I'm not an expert here on the subject, but this is my understanding.
For a simplified example, imagine two government buildings, one with and one without an accessibility ramp. A person in a wheelchair is able to access the former, even if going up the ramp takes longer than the stairs. Not having the option to take the stairs is still detrimental to the person, but they're still able to access those services. The second disables the person, as they're no longer able to access important services because they are unable to take the stairs.
Accommodations help keep "detrimental" from meaning disabled. The voice at the street crossing that says "walk", curb cuts, and closed captioning all help people participate in daily normal life, despite having those sensory disabilities.
There are other designs that are more holistic as well - for example, if those same government services are accessible online, or the agent makes house calls, it naturally makes the services more accessible to more people. (Note: I'm not saying that this specific example is a good idea - just as an example of "how we design our society affects how people can participate in it.")
Since I'm the person who wrote that I can explain what I meant.
I have never had to deal with a giant cat stalking me and being unable to hear it. I do routinely have to deal with intercom systems which I cannot hear, though.
The world most humans inhabit is human-made. And the human-made environment can be remade.
You're cheating with your world knowledge to guide the parsing.
eat man lion. lion man eat. man eat lion. eat lion man.
Who is eating who? When formed according to English grammar it doesn't leave any ambiguity even if the phrase is improbable: "The biscuit has eaten the girl."
Linguistic typology is the study of patterns in languages according to their structure. It's a niche topic, which is unfortunate, because certain patterns hint at something about the structure of human thought.
Such as with word order. Verb in the middle or at the start or at the end? Subject before verb or after verb? Object before verb or after verb? Every permutation does exist in some language.
But object before both subject and verb is extremely rare. And the few languages which do order things that way don't do it consistently; it often occurs only in certain moods or under certain conditions of syntactic alignment.
Language cannot be decoupled from what it is trying to work with. It is a tool! We manipulate the air with our mouths in such a way that ears can hear it.
I don't think there is anything wrong with allowing a small amount of "world knowledge" to guide language parsing - the world caused language to "be" not the other way around.
Anyway, when, outside of smoking crack, did a girl ever get eaten by a biscuit? Never, so that phrase is unambiguous.
Object before subject: I'll grant you that - it's a probable sign of madness, or a green puppet.
The GBP/USD currency pair is still known just as "the cable".
Aside from all its other uses: the telegraph gave a way to synchronize clocks. And an accurate time difference between two places is an accurate measurement of their difference in longitude, which is to say distance.
> [...] The latest determination in 1892 is due to the cooperation of the McGill College Observatory at Montreal, Canada, with the Greenwich Observatory. [...] The final value for the longitude of the Harvard Observatory at Cambridge, as adjusted in June, 1897, is 4h 44m 31s.046 ±0s.048.
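The conversion is simple enough to check by hand (my own arithmetic, not from the quoted source): the Earth turns 15 degrees of longitude per hour, so a telegraph-measured clock offset translates directly into position.

    # Earth turns 15 degrees of longitude per hour of clock offset.
    import math

    h, m, s = 4, 44, 31.046    # Harvard Observatory west of Greenwich
    err_s   = 0.048            # quoted uncertainty, in seconds of time

    hours = h + m / 60 + s / 3600
    print(hours * 15)          # ~ 71.129 degrees west

    # How much ground does 0.048 s represent at Cambridge's latitude (~42.4 N)?
    m_per_deg = 40_075_000 / 360 * math.cos(math.radians(42.4))
    print(err_s * 15 / 3600 * m_per_deg)   # ~ 16 meters east-west

Sub-second clock agreement across an ocean pins a longitude down to tens of meters, which is why observatories went to the trouble of cooperating over the cable.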
One of the major uses for the telegraph was the first funds transfers that could happen quicker than moving paper (or bullion) from one location to another. London banks would telegraph correspondent banks in India, Australia, etc.
This essentially halved the capital intensity of international trade: the goods still had to move in one direction, but the money no longer had to make the slow physical journey in the other.
When the pound replaced the Spanish silver dollar as the default global currency, it did so atop a nascent international banking system in which banknotes issued by one bank in one location could be honored by other banks in other locations.
Metal was thus often used to settle net balances between banks rather than to transact individual payments.