• Redkey@programming.dev · 2 days ago

    A couple of other commenters have given excellent answers already.

    But on the topic in general I think that the more you learn about the history of computing hardware and programming, the more you realise that each successive layer added between the relays/tubes/transistors and the programmer was mostly just to reduce boilerplate coding overhead. The microcode in integrated CPUs took care of routing your inputs and outputs to where they need to be, and triggering the various arithmetic operations as desired. Assemblers calculated addresses and relative jumps for you so you could use human-readable labels and worry less that a random edit to your code would break something because it was moved.
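
    A rough sketch of that second point, in Python rather than a real assembler (the mnemonics and instruction format here are made up): label resolution is basically a two-pass loop that first records where each label lands, then substitutes relative offsets.

    ```python
    # A toy two-pass "assembler" (purely illustrative -- invented mnemonics,
    # one instruction per address) that resolves human-readable labels into
    # relative jump offsets, so a random edit doesn't break every address.

    program = [
        ("loop:",),         # a label: not an instruction, just a marker
        ("DEC",),           # decrement some register
        ("JNZ", "loop"),    # jump back to "loop" if it isn't zero yet
        ("HALT",),
    ]

    # Pass 1: record the address (here just the index) of every label.
    labels, address = {}, 0
    for item in program:
        if item[0].endswith(":"):
            labels[item[0].rstrip(":")] = address
        else:
            address += 1

    # Pass 2: emit instructions, replacing label operands with offsets
    # relative to the *next* instruction.
    machine_code, address = [], 0
    for item in program:
        if item[0].endswith(":"):
            continue
        op, *operands = item
        operands = [labels[o] - (address + 1) if o in labels else o
                    for o in operands]
        machine_code.append((op, *operands))
        address += 1

    print(machine_code)   # [('DEC',), ('JNZ', -2), ('HALT',)]
    ```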

    More complex low-level languages took care of the little dances that needed to be performed in order to do more involved operations with the limited number of CPU registers available, such as advanced conditional branching and maintaining the illusion of variables. Higher-level languages freed the programmer from having to keep such careful tabs on their own memory usage, and helped to improve maintainability by managing abstract data and code structures.
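
    And a minimal sketch of the “register dance” / “illusion of variables” idea, again with an invented three-instruction machine rather than any real ISA: the named variables live in memory, and values only pass through the two available “registers” long enough for the arithmetic to happen.

    ```python
    # A sketch of how "c = a + b" has to be broken down on a machine with
    # only a couple of working registers (all names here are made up).

    MEMORY = {"a": 7, "b": 5, "c": 0}   # named variables, really just memory slots
    REGISTERS = {"r0": 0, "r1": 0}      # the only working storage the "CPU" has

    def load(reg, var):                 # move a variable's value into a register
        REGISTERS[reg] = MEMORY[var]

    def add(dst, src):                  # dst = dst + src, register-to-register only
        REGISTERS[dst] += REGISTERS[src]

    def store(var, reg):                # write a register back to a variable's slot
        MEMORY[var] = REGISTERS[reg]

    # "c = a + b" in a higher-level language becomes this little dance:
    load("r0", "a")
    load("r1", "b")
    add("r0", "r1")
    store("c", "r0")
    print(MEMORY["c"])                  # 12
    ```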

    But ignoring the massive improvements in storage capacity and execution speed, today’s programming environments don’t really do anything that couldn’t have been implemented with those ancient systems, given enough effort and patience. It’s all still just moving numbers around and basic arithmetic and logic. But a whole lot of it, really, really fast.

    The power of modern programming environments lies in how they allow us to properly implement and maintain a staggering amount of complex minutiae with relative ease. Such ease, in fact, that sometimes we even forget that the minutiae are there at all.

    • Mikina@programming.dev · 20 hours ago (edited)

      To add to this excellent answer, one thing that really made me understand quite a lot about how CPUs actually work, and why so much of the stuff is the way it is, was playing through the amazing “Turing Complete” puzzle game.

      The premise is simple - you start with basic AND/OR/NOT gates and slowly build up from there. You make a NAND, and can then use that design. Then you make a counter, and can use that. A one-bit memory. An adder. A multiplexer. All built from the components you have already designed before.
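
      Something like this progression, sketched as plain Python functions (this is only an analogy for the game’s circuit-building, not its actual mechanics): every new component is defined purely in terms of the ones built before it.

      ```python
      def AND(a, b): return a & b
      def OR(a, b):  return a | b
      def NOT(a):    return a ^ 1

      def NAND(a, b):                 # built only from the gates above
          return NOT(AND(a, b))

      def XOR(a, b):                  # built from what already exists
          return AND(OR(a, b), NAND(a, b))

      def half_adder(a, b):           # returns (sum bit, carry bit)
          return XOR(a, b), AND(a, b)

      def full_adder(a, b, carry_in): # two half adders plus an OR
          s1, c1 = half_adder(a, b)
          s2, c2 = half_adder(s1, carry_in)
          return s2, OR(c1, c2)

      def mux(select, a, b):          # 2-to-1 multiplexer: a if select is 0, else b
          return OR(AND(NOT(select), a), AND(select, b))

      print(full_adder(1, 1, 1))          # (1, 1): 1 + 1 + 1 = binary 11
      print(mux(0, 0, 1), mux(1, 0, 1))   # 0 1
      ```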

      Eventually you build up to an ALU and RAM, until you end up with a working CPU. Later levels even have you create your own instruction set and assembly language, but I never really got far into that part.
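
      The “working CPU” end state is essentially a fetch-decode-execute loop. Here’s a toy version with a completely made-up instruction set (not the game’s architecture), just to show what that ends up meaning:

      ```python
      def run(program):
          regs = [0, 0, 0, 0]        # four general-purpose registers
          ram = [0] * 16             # a little memory
          pc = 0                     # program counter
          while pc < len(program):
              op, *args = program[pc]
              pc += 1
              if op == "LOADI":      # LOADI reg, value
                  regs[args[0]] = args[1]
              elif op == "ADD":      # ADD dst, src
                  regs[args[0]] += regs[args[1]]
              elif op == "STORE":    # STORE reg, addr
                  ram[args[1]] = regs[args[0]]
              elif op == "JNZ":      # JNZ reg, target: jump if reg is non-zero
                  if regs[args[0]] != 0:
                      pc = args[1]
              elif op == "HALT":
                  break
          return regs, ram

      # Sum 5 + 4 + 3 + 2 + 1 by looping until r0 reaches zero.
      program = [
          ("LOADI", 0, 5),     # r0 = 5   (loop counter)
          ("LOADI", 1, 0),     # r1 = 0   (accumulator)
          ("LOADI", 2, -1),    # r2 = -1  (step)
          ("ADD", 1, 0),       # r1 += r0
          ("ADD", 0, 2),       # r0 -= 1
          ("JNZ", 0, 3),       # loop back to the first ADD while r0 != 0
          ("STORE", 1, 0),     # ram[0] = r1
          ("HALT",),
      ]
      regs, ram = run(program)
      print(ram[0])            # 15
      ```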

      It works great as a puzzle game - you have clear goals, and everything is pretty approachable and very well paced. I had no idea how memory is done at the circuit level, but the game made me figure it out, with hints for when I got stuck.
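
      For the memory part, the classic circuit-level answer is a pair of cross-coupled gates whose feedback loop holds the bit. A tiny simulation of an SR latch built from NOR gates (a standard textbook construction, not necessarily how the game presents it):

      ```python
      def NOR(a, b):
          return 0 if (a or b) else 1

      def sr_latch(s, r, q, q_bar):
          # Two cross-coupled NOR gates; loop a few times so the feedback settles.
          for _ in range(4):
              q = NOR(r, q_bar)
              q_bar = NOR(s, q)
          return q, q_bar

      q, q_bar = 0, 1                      # start with the bit cleared
      q, q_bar = sr_latch(1, 0, q, q_bar)  # pulse "set"
      q, q_bar = sr_latch(0, 0, q, q_bar)  # both inputs released again...
      print(q)                             # ...but the 1 is still remembered
      q, q_bar = sr_latch(0, 1, q, q_bar)  # pulse "reset"
      q, q_bar = sr_latch(0, 0, q, q_bar)
      print(q)                             # back to 0
      ```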

      And seeing a working CPU that you’ve designed from scratch is pretty cool, but most importantly - even though I had courses on hardware, CPU architecture and the like in college, there was a lot of stuff I only kind of understood, and it never really clicked. This game helped tremendously in that regard, and it was full of “aha” moments that finally connected a lot of what I knew about low-level computing.

      I’m not even into puzzle games that much, but this was just a joy to play. It was so fun I sat through it in one session, up until I got to a complete CPU. I very highly recommend it to anyone.

    • drosophila@lemmy.blahaj.zone · 1 day ago (edited)

      > The microcode in integrated CPUs took care of routing your inputs and outputs to where they need to be, and triggering the various arithmetic operations as desired.

      In the transition from plugboards to programmed sequence control, the thing that took over the task of routing values between registers, through the ALU, and to/from IO ports was the control unit. Microcode is one way to implement the functionality of the control unit.
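
      A hand-wavy sketch of the microcode idea (the signal names and control-word format below are invented, not any real CPU’s): each machine instruction expands into a short list of control words, and each control word is just a bundle of control-line settings for one step of routing/ALU work.

      ```python
      # Hypothetical microcode store: one list of control words per instruction.
      MICROCODE = {
          "ADD": [  # ADD dst, src
              {"reg_out": "src", "alu_in_b": True},                 # src -> ALU input B
              {"reg_out": "dst", "alu_in_a": True},                 # dst -> ALU input A
              {"alu_op": "add", "alu_out": True, "reg_in": "dst"},  # result -> dst
          ],
          "LOAD": [  # LOAD dst, [addr]
              {"addr_out": True, "mem_read": True},                 # address onto the bus
              {"mem_out": True, "reg_in": "dst"},                   # memory -> dst
          ],
      }

      def control_words(instruction):
          # The control unit's job, reduced to a lookup: emit one bundle of
          # control signals per micro-step of the current instruction.
          for step, signals in enumerate(MICROCODE[instruction]):
              print(f"{instruction} step {step}: {signals}")

      control_words("ADD")
      ```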

      Another approach was to use what was basically a finite state machine, implemented physically in-circuit. The output of that FSM was fed into a series of logic gates along with the current instruction value, and the output of that combination was connected to the control lines of the various CPU elements. Thus the desired switching/routing behavior occurred.
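
      A toy version of that hardwired approach (states, opcodes and signal names all made up; a Python if-chain stands in for the combinational logic): a small FSM steps through fetch/decode/execute, and a pure function of (state, opcode) decides which control lines to assert.

      ```python
      # Hypothetical three-state control FSM.
      NEXT_STATE = {"FETCH": "DECODE", "DECODE": "EXECUTE", "EXECUTE": "FETCH"}

      def control_lines(state, opcode):
          # In real hardware this would be a block of logic gates, not an
          # if-chain; the inputs and outputs are the same idea, though.
          if state == "FETCH":
              return {"mem_read": 1, "pc_increment": 1, "ir_load": 1}
          if state == "DECODE":
              return {"reg_select": opcode & 0x0F}
          if state == "EXECUTE":
              return {"alu_enable": 1, "alu_op": opcode >> 4, "reg_write": 1}

      state = "FETCH"
      opcode = 0x23                    # a made-up instruction encoding
      for _ in range(3):
          print(state, control_lines(state, opcode))
          state = NEXT_STATE[state]
      ```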

      Modern chips are really complicated hybrids of microcode and a ton of interacting finite state machines. Especially in x86, complex or less commonly used instructions are implemented in microcode, whereas simple/common instructions are implemented by being “hardwired”, somewhat similar to the FSM technique described above (although probably more complicated).