• FlatFootFox@lemmy.world · +336 · 7 months ago

    I still cannot believe NASA managed to re-establish a connection with Voyager 1.

    That scene from The Martian where JPL had a hardware copy of Pathfinder on Earth? That’s not apocryphal. NASA keeps a lot of engineering models around for a variety of purposes including this sort of hardware troubleshooting.

    It’s a practice they started after Voyager, though. They shot this patch off into space based on old documentation, blueprints, and internal memos.

    • nxdefiant@startrek.website · +183/−1 · 7 months ago

      Imagine scrolling back in the Slack chat 50 years to find that one thing someone said about how the chip bypass worked.

        • jaybone@lemmy.world · +56/−1 · 7 months ago

          This is why Slack is bullshit. And Discord. We should all go back to email. It can be stored and archived and organized and get off my lawn.

          • Artyom@lemm.ee · +18 · 7 months ago

            It’s not Slack’s fault. It is a good platform for one-off messages. Need a useless bureaucratic form signed? Slack. Need your boss to okay the afternoon off? Slack. Need to ask your lead programmer which data structure you should use and why they’re set up that way? Sounds like the answer should be put on a wiki page, not in Slack.

            All of these workflows are small components of a larger workplace. Email also sucks for a lot of things, and it probably wouldn’t have worked in this case; memos are the logical upgrade from email when you want to make sure everyone receives something and the topic is not up for further discussion.

            • ohwhatfollyisman@lemmy.world · +1/−1 · 6 months ago

              memos are the logical upgrade from emails where you want to make sure everyone receives it

              uh, email is memos? email is so memos that ibm’s proprietary email management solution Lotus Notes calls the transaction “create memo” where outlook calls it “new message”.

              and the topic is not up for further discussion.

              bit rude, imo.

            • jaybone@lemmy.world · +2/−4 · 7 months ago

              Sorry, email is still better for all of those things. Except the wiki page, of course.

          • deweydecibel@lemmy.world · +18 · 7 months ago

            I mean, unironically, yeah.

            It’s not even that we need to go back to email. The problem isn’t moving on from outdated forms of communication, it’s that the technology being pushed as a replacement for it is throwing out the baby with the bathwater.

            Which is to say nothing of the fact that all of these new platforms are proprietary, walled off, and in some cases don’t make controlling the data easy if you’re not hosting it (and their searches are trash).

            • sudo42@lemmy.world · +5 · edited · 7 months ago

              all of these new platforms are proprietary, walled off, and in some cases don’t make controlling the data easy if you’re not hosting it

              You’ve just discovered their business case. So many new businesses these days only insinuate themselves into an existing process in order to co-opt it and charge rents.

            • ferret@sh.itjust.works · +6 · 7 months ago
              1. Don’t use google as your email provider
              2. Keep backups of your email (you can do this on gmail, too)
          • xantoxis@lemmy.world · +23/−3 · 7 months ago

            Yeah. Technically I’m not talking about Microsoft, as their primary product is the OS and they are not purely Internet-based. IBM, of course, is much older than that and also has some Internet products, as does every software company.

            In my statement “Internet company” means a company whose only product is SaaS on the Internet; i.e. someone who, if they went away, their product would disappear with them.

            • Guy_Fieris_Hair@lemmy.world · +10 · edited · 7 months ago

              I guess it is hard to imagine an internet company lasting that long, mostly because the internet hasn’t been around that long; it’s only been 31 years since it went public. A year later Amazon was formed. I would bet money Amazon and Google easily make it to 50, along with many, many others. Expecting a small, not overly commercialized company like Slack to last that long would be crazy. I wouldn’t be surprised if it gets gobbled up by a megacorp as the enshittification continues.

              • xantoxis@lemmy.world · +10 · 7 months ago

                Google is actually the poster child for what I’m talking about. I’ll concede that it’s possible Google as a corporate entity will still exist in 2048 (it was founded in 1998). But Google has undergone such a drastic and dystopian management change that it’s almost not even the same company now

                –but that isn’t relevant to what I’m actually talking about, which is the products. The proposition that Slack logs would still be around 50 years from now was what catalyzed my quip. Google kills everything it makes, usually quickly. Will we be able to look at Google Reader logs in 2048? Or–even closer to the target–Google Wave logs? Google Podcasts? Google Stadia? (I could go on.)

                At the end of the day it was just a quip, but I fully expect the SaaS companies you currently think of as indestructible titans to be on the dustheap of history in 20 years, let alone 50.

                • Guy_Fieris_Hair@lemmy.world · +1 · 7 months ago

                  I don’t think the actual logs on slack will go away. Just maybe hosted on a different server owned by a different corporation.

              • I Cast Fist@programming.dev · +1 · 7 months ago

                Match Group (owner of nearly every dating site and app) is very likely to endure 50 years, and it is, afaik, a 100% internet company; pull the plug and it disappears without a trace.

          • imgcat@lemmy.ml · +3 · 7 months ago

            And most Microsoft products surely can run 50 years with no glitches.

          • MrSpArkle@lemmy.ca · +1/−1 · 7 months ago

            They were a software company for decades before they became an “internet company”.

        • nxdefiant@startrek.website · +6 · 7 months ago

          IBM is 100, but the Internet didn’t exist in 1924, so we’ll say the clock starts in 1989. I’m pretty sure at least MS or IBM will be around in 15 years.

    • ricecake@sh.itjust.works · +36 · 7 months ago

      To add to the metal, the blueprints include the blueprints for the processor.

      https://hackaday.com/2024/05/06/the-computers-of-voyager/

      They don’t use a microprocessor like anything today would, but a pile of chips that provide things like logic gates and counters. A grown up version of https://gigatron.io/

      That means “written in assembly” really means “written in a bespoke assembly dialect, for bespoke hardware, and we maybe didn’t document either very well”.

    • BearOfaTime@lemm.ee · +26 · 7 months ago

      I realize the Voyager project may not be super well funded today (how is it funded, just general NASA funds now?), just wondering what they have hardware-wise (or ever had). Certainly the Voyager system had to have precursors (versions)?

      Or do they have a simulator of it today? We’re talking about early-’70s hardware, so it should be fairly straightforward to replicate in software. Perhaps some independent geeks have done this for fun? (I’ve read of some old hardware, such as the 8088, being replicated in software because some geeks just like doing things like that.)

      I have no idea how NASA functions with old projects like this, and I’m surely not saying I have better ideas - they’ve probably thought of a million more ways to validate what they’re doing.

        • wewbull@feddit.uk · +15 · 7 months ago

          You sure? The smell of some of the corpses will have been terrible.

          I’m not saying they’re all dead, but an intern at the time of launch would now be 70. Anybody who actually designed anything is… Well… The odds of them still being around are low.

          • Flummoxed@lemmy.world · +6 · 7 months ago

            I have an uncle who worked on Apollo writing machine code, and he is a spry, clear-headed 80-something-year-old.

      • FlatFootFox@lemmy.world · +17 · 7 months ago

        The Hard Fork podcast had a pretty good episode recently where they interviewed one of the engineers on the project. They’d troubleshot the spacecraft enough in the past that they weren’t starting from square one, but it still sounded pretty difficult.

      • SpaceNoodle@lemmy.world · +12 · 7 months ago

        They apparently didn’t have an emulator. The first thing I’d have done when working on a solution would have been to build one, but they seem to have pulled it off without one.

      • Baggie@lemmy.zip · +6/−2 · 7 months ago

        100% they’ve got an emulator. They’ve had dedicated test environments for emulating disaster-recovery scenarios since the moon landings; they’ve likely got at least one functioning hardware replica, and very likely can spin up a hardware emulation as a virtual machine at will.

        Source: I made this up, but I have a good understanding of systems administration and an interest in space stuff, so I’m pretty confident they would have this stuff at bare minimum.

        • BearOfaTime@lemm.ee · +2 · 7 months ago

          That’s my assumption too, but we’re talking about a different era, and I really have no idea how they approached validation and test/troubleshooting.

          I’ve seen some test environments for manned missions, but that’s really for humans to validate what they’re doing.

          V’ger was quick 'n dirty by comparison (with no criticism of the process or folks involved…they had one chance to get these missions out there).

  • merc@sh.itjust.works · +250 · 7 months ago

    To me, the physics of the situation makes this all the more impressive.

    Voyager has a 23-watt radio. That’s about 10x the power of a cell phone’s radio, but it’s still small. Voyager is so far away that its signal takes 22.5 hours to reach Earth traveling at light speed. It’s a radio beam, not a laser, but it’s an extraordinarily tight beam for a radio, only 0.5 degrees wide; even so, the beam is about 1000x wider than the Earth by the time it arrives. It’s received by some of the biggest antennas ever made, but they’re still only 70m wide, so each one captures only a tiny fraction of the transmitted power. So, they’re decoding a signal that’s on the order of 10^-18 watts.
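    Those numbers hang together if you sketch the link budget. This is a crude uniform-beam model (real antennas have gain patterns), so treat it as order-of-magnitude only:

```python
import math

# Back-of-the-envelope link budget using the figures above: a 23 W
# transmitter, 0.5-degree beam, one 70 m DSN dish, 22.5 light-hours away.
C = 299_792_458                     # speed of light, m/s
P_TX = 23.0                         # transmit power, W
BEAM_DEG = 0.5                      # full beam width, degrees
DISH_M = 70.0                       # receiving dish diameter, m

distance = 22.5 * 3600 * C                                    # ~2.4e13 m
spot_radius = distance * math.tan(math.radians(BEAM_DEG / 2))
spot_area = math.pi * spot_radius**2        # beam footprint at Earth
dish_area = math.pi * (DISH_M / 2)**2       # dish collecting area

# Naive estimate: the dish catches its share of the (uniform) beam.
p_rx = P_TX * dish_area / spot_area
print(f"received power ~ {p_rx:.1e} W")     # on the order of 1e-18 W
```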

    So, not only are you debugging a system created half a century ago without being able to see or touch it, you’re doing it with a 2-day delay to see what your changes do, and using the most absurdly powerful radios just to send signals.

    The computer side of things is even more impressive than this makes it sound. A memory chip failed. On Earth, you’d probably figure that out by physically looking at the hardware and probing it with a multimeter or an oscilloscope. They couldn’t do that. They had to debug it by watching the program as it ran and as it tried to use the faulty memory chip and failed in interesting ways. They could interact with it, but only on a 2-day delay. They also knew that with any wrong move, the little control they still had could fail and the spacecraft would be fully dead.
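    A hypothetical sketch of that style of remote debugging: diff what the spacecraft reads back against the known program image and see which address window (and therefore which chip) is corrupt. The window size, image contents, and fault here are all made up for illustration:

```python
# Localize a failed memory chip from telemetry alone by diffing the
# dumped readback against the known program image.
EXPECTED = bytes(range(256)) * 4      # stand-in for the known 1 KiB image
CHIP_WINDOW = 256                     # pretend each chip maps 256 bytes

def find_bad_windows(expected, dumped, window=CHIP_WINDOW):
    """Return (start, end) address ranges whose readback disagrees."""
    bad = []
    for start in range(0, len(expected), window):
        if expected[start:start + window] != dumped[start:start + window]:
            bad.append((start, start + window))
    return bad

# Simulate a dump in which one 256-byte window comes back zeroed.
dump = bytearray(EXPECTED)
dump[512:768] = bytes(256)
print(find_bad_windows(EXPECTED, bytes(dump)))   # [(512, 768)]
```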

    So, a malfunctioning computer that you can only interact with at 40 bits per second, that takes 2 full days between every send and receive, that has flaky hardware and was designed more than 50 years ago.

    • flerp@lemm.ee · +93/−7 · 7 months ago

      And you explained all of that WITHOUT THE OBNOXIOUS GODDAMNS and FUCKIN SCIENCE AMIRITEs

      • KubeRoot@discuss.tchncs.de · +20/−1 · 7 months ago

        Oh screw that, that’s an emotional post from somebody sharing their reaction, and I’m fucking STOKED to hear about it, can’t believe I missed the news!

    • chimasterflex@lemmy.world · +66 · 7 months ago

      Finally, something I can weigh in on. I’ve worked in memory testing for years, and I’ll tell you that it’s actually pretty expected for a memory cell to fail after some time. So much so that what we typically do is build redundancy into the memory: we add more memory cells than we activate at any given time. When shit goes awry, we can reprogram the memory controller to remap the addresses so that the bad cells are mapped out and unused spares are mapped in. We don’t typically probe memory cells unless we’re doing some type of in-depth failure analysis; usually we just run a series of algorithms that test each cell, identify which ones aren’t responding correctly, then map those out.

      None of this is to diminish the engineering challenges that they faced, just to help give an appreciation for the technical mechanisms we’ve improved over the last few decades.
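      A toy sketch of that remap-and-spare idea (the class, the stuck-at-zero fault model, and the “march test” are all illustrative; real controllers do this in hardware, not Python):

```python
# Memory with spare cells and a controller-style remap table.
class RemappingMemory:
    def __init__(self, size, spares, stuck_at_zero=()):
        self.size = size
        self.cells = [0] * (size + spares)
        self.stuck = set(stuck_at_zero)             # simulated faulty cells
        self.spare_pool = list(range(size, size + spares))
        self.remap = {}                             # logical -> spare physical

    def _phys(self, addr):
        return self.remap.get(addr, addr)

    def write(self, addr, value):
        p = self._phys(addr)
        self.cells[p] = 0 if p in self.stuck else value   # stuck-at-0 fault

    def read(self, addr):
        return self.cells[self._phys(addr)]

    def march_test(self):
        """Write/read a test pattern through every logical address and
        map any failing address onto a spare cell."""
        remapped = []
        for addr in range(self.size):
            self.write(addr, 0xA5)
            if self.read(addr) != 0xA5 and self.spare_pool:
                self.remap[addr] = self.spare_pool.pop(0)
                remapped.append(addr)
        return remapped

mem = RemappingMemory(size=256, spares=4, stuck_at_zero={17, 99})
print(mem.march_test())   # [17, 99] - the bad addresses get mapped out
mem.write(17, 123)
print(mem.read(17))       # 123 - reads back correctly via the spare cell
```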

      • trolololol@lemmy.world · +10 · edited · 7 months ago

        pretty expected for a memory cell to fail after some time

        50 years is plenty of time for the first memory chip to fail. Most systems would face total failure from multiple defects in half that time, WITH physical maintenance.

        Also remember it was built with tools from the 70s. Which is probably an advantage, given everything else is still going

        • orangeboats@lemmy.world · +4 · 7 months ago

          Also remember it was built with tools from the 70s. Which is probably an advantage

          Definitely an advantage. Even setting aside planned obsolescence, older electronics are pretty tolerant of outside interference compared to modern ones.

      • merc@sh.itjust.works · +2 · 7 months ago

        what we typically do is build in redundancy into the memory cells

        Do you know how long that has been going on? Because Voyager is pretty old hardware.

    • graymess@lemmy.world · +13 · 7 months ago

      Is there a Voyager 1, uh…emulator or something? Like something NASA would use to test the new programming on before hitting send?

      • Landless2029@lemmy.world · +3 · 7 months ago

        Today you would keep a physical duplicate on the ground of anything you put in orbit, so you can test code changes before pushing them to the real thing.

    • uis@lemm.ee · +2 · edited · 7 months ago

      They have spare Voyager on Earth for debugging

      EDIT: or not

      • bstix@feddit.dk · +43 · 7 months ago

        Absolutely. The computers on Voyager hold the record for being the longest continuously running computers of all time.

    • IchNichtenLichten@lemmy.world · +49/−1 · 7 months ago

      Microsoft can’t even release a fix for Windows’ recovery partition being too small to stage updates. I had to do it myself, fucking amateurs.

      • bstix@feddit.dk · +12 · edited · 7 months ago

        Can’t or won’t? The same issue exists for both Windows 10 and 11, but they haven’t closed the ticket for Windows 11… Typical bullshit. It’s not exactly planned obsolescence, but when a bug like that comes up they just grab the opportunity to go “sry impossible, plz buy new products”.

      • space@lemmy.dbzer0.com · +5 · 7 months ago

        Not to mention what a pain that partition is when you need to shrink or grow your Windows partition. If you need to upgrade your storage, or resize the partition to make room for other operating systems, you have to follow like 20 steps of voodoo magic commands to do it.

        • BeardedGingerWonder@feddit.uk · +1 · 7 months ago

          Whoa, learned that one at the weekend. Added a new NVMe drive and cloned the old drive. I wanted to expand my Linux partition, but it was at the start of the drive, so I shifted all the Windows stuff to the end and grew the Linux partition.

          Thought I’d boot into Windows to make sure it was okay, just in case (even though I apparently hadn’t booted it in 3 years). BSOD. 2-3 hrs later it was working again. I’m still not sure what fixed it, if I’m honest; I seemed to just rerun the same bootrec commands and startup repair multiple times, but it works now, so yay!

            • BeardedGingerWonder@feddit.uk · +1 · 7 months ago

              Jeez, I’ve just looked at the list of utilities; I’m not surprised it’s got FireWire drivers for DOS included. You’ve got to be pretty deep into the weeds by the point you need FireWire support in DOS from a recovery disk!

        • ℛ𝒶𝓋ℯ𝓃 · +34 · edited · 7 months ago

          Windows 13 update log:

          Change kernel to Linux.

          Build custom OS for astrophysics and space science applications.

          happy rocket engineer noises

          • jnk@sh.itjust.works · +2 · 7 months ago

            Now I’m curious. What would a NasaOS look like? Would it even be good for general use? Would they just focus on optimization? Could it finally beat Hannah Montana Linux, the superior OS?

            • anton@lemmy.blahaj.zone · +2 · 7 months ago

              I think it would have a real-time kernel running in parallel with a Linux kernel.
              Users could interact with the Linux kernel normally and schedule trusted real-time tasks on the other. Maybe there’d be reduced security for added performance on those cores.

              In general use it would be a normal stable system with the allure of a performance mode that will break your system if you are not careful.

      • Aux@lemmy.world · +4/−5 · 7 months ago

        Well, they only had to test it for a single hardware deployment. Windows has to be tested for millions if not billions of deployments. Say what you want, but Microsoft testers are godlike.

  • Rob@lemmy.world · +134 · 7 months ago

    Interviewer: Tell me an interesting debugging story

    Interviewee: …

    • sudo42@lemmy.world · +3 · edited · 7 months ago

      Heh. Years ago during an interview I was explaining how important it is to verify a system before putting it into orbit. If you find problems in orbit, you usually can’t fix them. My interviewer said, “Why not just send up the space shuttle to fix it?”

      Well…

  • LadyAutumn@lemmy.blahaj.zone · +125 · edited · 7 months ago

    It’s hard to explain how significant the Voyager 1 probe is in terms of human history. Scientists knew as they were building it that they were making something that would have a significant impact on humanity. It’s the first man-made object to leave the heliosphere and properly enter the interstellar medium, and this was always just a secondary goal of the probe. It was primarily intended to explore the gas giants, especially the Jovian lunar system. It did its job perfectly and gave us so many scientific discoveries just within our solar system.

    And I think there’s something sobering about the image of it going on a long, endless road trip into the galactic ether with no destination. It’s a pretty amazing way to retire. The fact that even today we get scientific data from Voyager, that from so far away we can still communicate with it and control it, is an unbelievable achievement of human ingenuity and scientific progress. If you’ve never seen the Pale Blue Dot image, you should. That linked picture is a revised version of the image that NASA released in 2020. It’s part of a group of the last pictures ever taken by Voyager 1, on February 14th, 1990: a picture of Earth from 6 billion kilometers away. It’s one of my favorite pictures, and it kinda blows my mind every time I see it.

    • SoleInvictus@lemmy.blahaj.zone · +55 · 7 months ago

      The pale blue dot photo always makes me tear up. We’re so small and insignificant in such a grand universe and I’m crushed that I can’t explore it.

      • Dyskolos@lemmy.zip · +45 · 7 months ago

        There will always be a “step further we’d love to see but won’t”. Let’s be glad we’re in that step which included this photo and the inherent magnificence in it.

        It totally beats being one of the earlier humans who just wondered what the lights in the sky might be. Probably gods or something.

        • MIDItheKID@lemmy.world · +7 · 7 months ago

          There will always be a “step further we’d love to see but won’t”

          I dunno, it could be really bad out there. We have really romanticized versions of space exploration in our brains, like finding habitable planets and other intelligent life. But what if that other intelligent life is super far advanced, and also capitalist? And they figured out how to inject advertisements into brains. And they want to share their technology with us.

  • xantoxis@lemmy.world · +111 · 7 months ago

    I think the term “metal” is overused, but this is probably the most metal thing a programmer could possibly do besides join a metal band.

  • ristoril_zip@lemmy.zip · +90 · 7 months ago

    Keep in mind too these guys are writing and reading in like assembly or some precursor to it.

    I can only imagine the number of checks and rechecks they probably go through before they press the “send” button. Especially now.

    This is nothing like my loosey-goosey programming where I just hit compile or download and wait to see if my change works the way I expect…

    • KillingTimeItself@lemmy.dbzer0.com · +80 · 7 months ago

      They almost certainly have a hardware spare or, at the very least, an accurately simulated version of it, because again, this is 50-year-old hardware, so it’s pretty easy to simulate.

      But yeah they are almost certainly pulling some really fucked QA on this shit.

    • Inktvip@lemm.ee · +45 · 7 months ago

      As someone who recently switched from AWS to Azure I feel your pain.

      Best part is when you finally have a working solution, Microsoft sends you an email that it’s being deprecated.

        • Inktvip@lemm.ee · +3 · edited · 7 months ago

          Oh I switched jobs, so not switch as in migrate.

          The industry I work in now is very conservative, so Microsoft is a brand people know and “trust”. Amazon is scary and new.

    • theangryseal@lemmy.world · +25 · 7 months ago

      As a teenager I experienced a power outage while I was updating my bios.

      Guess what happened?

      I’m still bitter about it.

      • Karyoplasma@discuss.tchncs.de · +16 · 7 months ago

        You can mitigate that risk by getting a UPS. You should get a UPS in any case, imo, since even a shitty one lets you at least save your work and shut down properly if your power drops.

    • Raxiel@lemmy.world · +17 · 7 months ago

      I updated mine a couple of weeks ago. I was actually really anxious as it went through the process, but it worked fine, at first…
      Then I found out Microsoft considered it a new computer and deactivated Windows. (And that’s when I found out they deleted upgrade licenses from Windows 7 & 8 back in September.)

  • Nougat@fedia.io · +58/−5 · 7 months ago

    My understanding is that they sent V’Ger a command to do “something,” and then the gibberish it was sending changed, and that was the “here’s everything” signal.

    And yeah, I’m calling it V’Ger from now on.

  • ikidd@lemmy.world · +50 · 7 months ago

    When I heard what they did, I was blown away. A 50-year-old computer (that was probably designed a decade before launch), and the geniuses who built it included the facility to completely reprogram it from a light-day away.