nasretdinov's comments

Why did we make just an infrared telescope then? Why not go to even lower frequencies? Surely we would detect something there too if we just looked?

Because near/mid infrared has many uses other than high-z objects, and it’s been something of a relative blind spot to us until now, although before Webb we did have Spitzer.

For far-IR/submillimeter observations we had Herschel in space and SOFIA in the stratosphere (flying on a 747), and several large terrestrial telescopes at very high altitudes can also observe at FIR/submm wavelengths. But sure, there are likely many astronomers who would love nothing more than a new spaceborne FIR telescope, given that it’s been more than a decade since Herschel’s end of mission, and SOFIA was also retired in 2022.

For microwave we’ve had several space telescopes (COBE, then WMAP, then Planck), mainly designed to map the cosmic microwave background. That’s the farthest and reddest that you can see in any EM band, from roughly 380,000 years after the Big Bang.

Past microwave, that’s the domain of radio astronomy, with entirely different technology needed. We have huge radio telescope arrays on the ground – the atmosphere is fairly transparent to radio so there’s no pressing reason to launch radio telescopes to space, and their size would make it completely infeasible anyway, at least until some novel low-mass, self-unfolding antenna technology comes along.


This may be a silly question, but would you be able to create an interferometer-style telescope array in space via a platform like Starlink, i.e. small, inexpensive sats? Would that reduce or eliminate the need to launch large singular antennas?

That would probably be difficult at optical wavelengths. At radio wavelengths you might have a better shot, but we can build radio interferometric telescopes on Earth and since the atmosphere is relatively transparent at radio frequencies, you probably aren't going to get any advantage by trying to build one in Earth orbit.

Though not the same thing, you may be interested in https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Ant...


There is a mission concept for a far-infrared interferometer: https://asd.gsfc.nasa.gov/spice/

One would need to go to space for that of course.


>you probably aren't going to get any advantage by trying to build one in Earth orbit.

People want to put a radio telescope on the far side of the moon, so that it doesn't have interference from terrestrial RF sources:

https://en.wikipedia.org/wiki/Lunar_Crater_Radio_Telescope

...and your spatial resolution is proportional to the size of your telescope. So you could have really high resolution if you speckled your interferometric telescope array units around L1, L2, L4, and L5.


The lower the frequency, the longer the wavelength and thus the larger the dish needed to detect it. That's why radio telescopes are on Earth: they are HUGE.

Radio telescope dishes are huge so that they can receive (or even transmit in the case of Arecibo, which is gone now) a narrow beam. At long wavelengths you need something huge to get a narrow beam.

But you can also use multiple, much smaller antennas to synthesize a narrow beam; those elements are often small dishes themselves, but they can also be very simple, rather small antennas.
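To put rough numbers on the beam-width point (my own back-of-the-envelope illustration, not from the comment above): the diffraction-limited beam of a single aperture is roughly θ ≈ λ/D, so

    θ ≈ λ / D
    21 cm hydrogen line, 100 m dish:      0.21 / 100        ≈ 2.1e-3 rad ≈ 7 arcminutes
    visible light (500 nm), 2.4 m mirror: 1.22 * 5e-7 / 2.4 ≈ 2.5e-7 rad ≈ 0.05 arcseconds

which is why radio astronomers lean so heavily on synthesizing a much larger effective D out of many modest antennas.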


Interferometry is good for seeing small objects, but not faint objects. For faint objects there's nothing that works better than a giant dish.

Excellent question!

The longest wavelengths of light are generally classified as "radio".

So radio telescopes have been tasked to explore the very early universe.

https://en.wikipedia.org/wiki/Reionization

If I understand it correctly, the "Period of Reionization" is the first light we can see from processes like stars and galaxies.

There was ionized plasma at the beginning, but the universe was like a really thick fog everywhere: that first light was scattered around, and you couldn't really see stars. As the universe expanded, that fog cooled down and you could see, but cold matter doesn't emit much light, so there wasn't much to see. It took a while for gas clouds to collapse into the first stars, heating the gas back up to an ionized plasma, so it's re-ionized matter.

The Low Frequency Array, LOFAR, has been used to study this "Cosmic Dawn".

The Square Kilometer Array was designed to explore this era.

But! JWST, which is not a radio telescope, has revealed unexpected, huge globs that seem to be galaxy-sized gas clouds collapsing into (maybe) black-hole cores; the thermal emission from the collapse isn't nuclear fusion, so I don't know if those are "stars". But it's very early light.

Honestly, every time a new class of telescope is built, it discovers fundamentally new phenomena.

https://duckduckgo.com/?q=LOFAR+square+kilometer+array+reion...

https://news.ycombinator.com/item?id=44739618

https://news.ycombinator.com/item?id=46938217

I searched "Reionization" and "Cosmic Dawn" plus some favorite telescopes via the web and here using the Hacker News search (Algolia).

(Certainly you know the difference between radio and infrared, but I had to look into how those choices of telescope have observed different aspects of the Reionization Era, got nerd-sniped, and just had to write it down in a couple of sentences.)


It's safe to say that if we are sticking a 6-ton 20ft mirror into space that the scientists probably have a reason for it...

Lower frequencies are microwaves and radio waves. We already have the Square Kilometer Array.

Because infrared is the hardest to observe from the ground: hot objects glow, and the sky is at a temperature where it glows in the infrared.

"just an infrared telescope"

How about you go make yourself conversant with "just" the technical requirements of the main cryogenic pump onboard, leaving out the rest of the thermal management systems, for whatever remains of your life, which will have to be long in order to fail honorably.


Sorry, I didn't mean it's easy to build, far from it :). I meant "just infrared" in terms of frequency — why not go further? Is there a gap between the current infrared and radio on Earth?

Wavelength for electromagnetic waves = c/frequency.

So to 'catch' a certain frequency with a receiver, the size of the receiver has to grow proportionally as the frequency drops. Focusing light can be done with relatively small gear; focusing radio waves, especially when the source is distant, requires a massive structure, and keeping that structure sufficiently cool and structurally rigid is a major challenge. It is already a challenge for the JWST at its current wavelengths; increasing the wavelength while maintaining the sensitivity would create some fairly massive complications.
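As a rough illustration of the scales involved (the numbers here are mine, not the commenter's):

    λ = c / f
    JWST's longest wavelength, ~28 µm  ->  f = 3e8 / 2.8e-5 ≈ 1.1e13 Hz (about 10 THz)
    10 GHz microwave                   ->  λ = 3e8 / 1e10   = 3 cm
    1.4 GHz hydrogen line              ->  λ = 3e8 / 1.4e9  ≈ 21 cm

so a telescope that wants comparable angular resolution at those longer wavelengths needs a collecting structure orders of magnitude larger.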

In the end this is a matter of funding, and JWST already nearly got axed multiple times due to its expense.


I am poking fun (at your expense) at the notion that because the light is already there, adding other sensors would be feasible. Once you grasp the requirements of building an infrared telescope, you will be going: oh!, damn, wow! It's actually not that deep a dive to get a feel for just how special the JWST is from an engineering perspective, and then a look into just how difficult it will be to get visible light from those distances, which may require an interferometric telescope with multiple huge sub-units flying in formation, at distances known to within a fraction of the target wavelength yet perhaps several hundred thousand km apart. Doable, but :), just.

The temperature gradient across that thing is mindblowing.

I've used a different approach to this: there's no real need to modify the compiled binary code because Go compiles everything from source, so you can patch the functions at the source level instead: https://github.com/YuriyNasretdinov/golang-soft-mocks

The way it works is that at the start of every function it adds an if statement that atomically checks whether the function has been intercepted, and if it has, executes the replacement function instead. This also addresses the inlining issue.
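A minimal sketch of what that source-level rewrite amounts to (an illustration of the idea, not the actual code golang-soft-mocks generates; all names here are made up):

    package example

    import "sync/atomic"

    // fooMock holds a replacement for Foo, if one has been registered.
    var fooMock atomic.Value // stores a func(int) int

    // Foo is the original function; the injected preamble atomically checks
    // for a registered mock before running the real body.
    func Foo(x int) int {
        if m := fooMock.Load(); m != nil {
            return m.(func(int) int)(x)
        }
        return x * 2 // original body
    }

    // SetFooMock registers a replacement for Foo.
    func SetFooMock(f func(int) int) { fooMock.Store(f) }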

My tool no longer works because it relied on rewriting GOPATH, and Go has since effectively switched to Go modules, but if you're persistent enough you can make it work with modules too: all you need to do is rewrite the Go module cache instead of GOPATH and you're good to go.


Go defines structs the same way C does, so it already encourages thinking about and optimising the physical data layout. It also recently added experimental support for SIMD intrinsics: https://go.dev/doc/go1.26#simd . Nothing on the GPU side yet, though, but I wouldn't be surprised to see it there eventually too :)

Yes, I know Go's structs are similar to C in terms of syntax, but does the Go compiler guarantee the same bitwise layout for its data structures? Most GC languages add metadata to the data structures to track GC status, and this changes both the memory layout and the word alignment, which then sometimes forces the language to add extra padding to maintain alignment. And this nests as you put one struct inside another, or an array inside a struct.

Now you have "fat arrays" and "fat structs", so instead of grabbing a pointer, loading the next 128 bits, and doing an operation, you have to grab the pointer, read out data from individual elements, combine them, create a new element with the combined data, and only then do you have 128 bits. But even then, you don't know whether you have 128 bits or not: some GC-specific metadata might have been added by the compiler (and probably was).

Bottom line: it's very hard in the GC world to have bit-wise control over memory layout, even if the user-level syntax of "structs" is the same. And one consequence of that is that you can't just "do" SIMD in Go. You have to wait for Go to expose a library that does this for you, and you will always be limited by what kinds of unpacking/repacking the language designers allowed you to do.

Or, you are stuck with hoping the compiler is very smart, which is never the case and requires huge compile times for marginal gains in compiler smarts.

So it's not about GC collection pauses so much as no longer having access to memory layouts.


> does the Go compiler guarantee the same bitwise layout for its data structures

It probably won't be fully 1:1 with C, but it's good enough that you can write code like this and it works: https://github.com/fsnotify/fsnotify/blob/main/backend_inoti... (unix.InotifyEvent is just a Go struct: https://pkg.go.dev/golang.org/x/sys/unix#InotifyEvent)

> Now you have "fat arrays" and "fat structs", so instead of grabbing a pointer and loading the next 128 bits into memory and doing an operation, you have to grab the pointer, read out data from individual elements, combine them, create a new element with the combined data, and then you have a 128 bits.

That is not how it works: you get real pointers that you can even do arithmetic on using the unsafe package.

> Most GC languages add metadata to the data structures to track GC status, and this changes both the memory layout and the word alignment, which then sometimes forces the language to add extra padding to maintain alignment

Go's GC uses a separate memory region to track GC metadata. It does not embed this information into structs, arrays, etc. directly.
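A small sketch that illustrates both points (my own example, assuming a typical 64-bit platform; the Event type below is a stand-in loosely modelled on unix.InotifyEvent, not taken from it):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Event mirrors a C-style struct: four plain fields, no hidden GC header.
    type Event struct {
        Wd     int32
        Mask   uint32
        Cookie uint32
        Len    uint32
    }

    func main() {
        e := Event{Len: 42}
        fmt.Println(unsafe.Sizeof(e))       // 16: exactly four 4-byte fields
        fmt.Println(unsafe.Offsetof(e.Len)) // 12: fields sit at their natural offsets

        // Pointer arithmetic on a field via unsafe, as mentioned above.
        p := (*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&e)) + unsafe.Offsetof(e.Len)))
        fmt.Println(*p) // 42
    }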

> And one consequence of that is that you can't just "do" SIMD in Go. You have to wait for Go to expose a library that does this for you, and you will always be limited by what types of unpacking/repacking the language designers allowed you to do.

You very much could, thanks to what I described above. You'd have to write assembly (Go supports assembly), and it's already used in some crypto libraries, for example, not just for performance reasons but also to ensure constant-time operation.

The downside of using assembly is that it doesn't support inlining, and there's a small shim to keep the ABI backwards-compatible with the original way functions were called (passing arguments on the stack, whereas the newer ABI uses registers). So you need to write your loops in assembly too, to eliminate the function-call overhead. The SIMD package solves this issue by allowing the code to be inlined.
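For reference, a hedged sketch of what the hand-written assembly route looks like (my own illustrative example, amd64 only, using the stack-based calling convention that the shim mentioned above wraps; not taken from any real library):

    // add8_amd64.go
    package vecadd

    // add8 adds two 8-element float32 vectors using SSE;
    // the body lives in add8_amd64.s below.
    //go:noescape
    func add8(a, b, dst *[8]float32)

    // add8_amd64.s
    #include "textflag.h"

    // func add8(a, b, dst *[8]float32)
    TEXT ·add8(SB), NOSPLIT, $0-24
        MOVQ   a+0(FP), AX
        MOVQ   b+8(FP), BX
        MOVQ   dst+16(FP), CX
        MOVUPS (AX), X0        // first 4 floats of a
        MOVUPS (BX), X1        // first 4 floats of b
        ADDPS  X1, X0
        MOVUPS X0, (CX)
        MOVUPS 16(AX), X2      // last 4 floats of a
        MOVUPS 16(BX), X3      // last 4 floats of b
        ADDPS  X3, X2
        MOVUPS X2, 16(CX)
        RET

Every call to add8 still pays the (non-inlinable) call overhead, which is exactly why you end up moving whole loops into the .s file.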


> Go GC uses a separate memory region to track GC metadata. It does not embed this information into structs, arrays, etc, directly.

I didn't know this, thank you. That's a solid approach.


Ever since iOS introduced the "Reduce Interruptions" mode I've been using it, and it's really great. It's not as customisable as this app, but I still highly recommend anything like this for those who are tired of notification spam.

Yeah I was a bit surprised by this too. I think the post was written around 10 years ago, when it still was a genuine problem in Go.

Good eye. This is why HN titles have year tags. :-)

Yeah, this is from 2016. I don't think choosing C over C++ was defensible even back then, but the critique of Go makes more sense now.

https://web.archive.org/web/20160109171250/http://jonathanwh...


Civ V definitely solved the issue by separating unit strength and their HP. Not sure about Civ 4, but I think it applies there too.

Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, Meena came a few years before ChatGPT was publicly announced :)

I think the important part here is "from scratch". Typically when you're designing a new (second, third, whatever) system to replace the old one you actually take the good and the bad parts of the previous design into account, so it's no longer from scratch. That's what allows it to succeed (at least in my experience it usually did).

These days, most software has been done before. You should be able to find others who have done similar things and learn lessons from them. Considering microservices? There are lots of people who have done them and can tell you what worked well and what didn't. Considering using Qt? Lots of others have, and can give you ideas. Considering writing your own framework? There are lots of others: look at what they do well and badly.

If you are doing a CRUD web app for a local small business - there are thousands of examples. If you are writing control software for a space station - you may not have access to code from NASA/Russia/China but you can at least look at generic software that does the things you need and learn some lessons.


It's always November, isn't it? I once made a log-collection system that had a map of month names to months (I had to create it because the Go date package didn't support that specific abbreviation for month names).

As you might've guessed, it lacked November, but no one noticed for 4+ months, and I've since left the company. It created a local meme, #nolognovember, and even got out to the public (it was in Russia: https://pikabu.ru/story/no_log_november_10441606)
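For flavor, a hypothetical reconstruction of the kind of hand-rolled mapping described above (not the actual code from that system; assumes the standard time package is imported):

    var monthByAbbrev = map[string]time.Month{
        "jan": time.January, "feb": time.February, "mar": time.March,
        "apr": time.April,   "may": time.May,      "jun": time.June,
        "jul": time.July,    "aug": time.August,   "sep": time.September,
        "oct": time.October, /* "nov": time.November is missing */ "dec": time.December,
    }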


> This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).

Yeah, you're certainly not the only one. For me the implementation part has always been a breeze compared to all the "communication overhead", so to speak. And in any mature system that overhead easily takes 90% of the time or more.


