It’s not the first time a language or tool has been lost to the annals of the job market, e.g. VB6 or FoxPro. Previously, though, such transitions happened gradually, giving most people enough time to adapt to the changes.

I wonder what it’s going to be like this time, now that the machine (with the help of humans, of course) can accomplish an otherwise risky, multi-month corporate project much faster. What happens to all those COBOL developer jobs?

Pray share your thoughts, especially if you’re a COBOL professional and have more context around the implications of this announcement 🙏

  • simple@lemm.ee · +90/-1 · 1 year ago

    I have my doubts that this works well; every LLM we’ve seen that translates or writes code often makes mistakes and outputs garbage.

  • IHeartBadCode@kbin.social · +56 · 1 year ago

    This sounds no different than the static analysis tools we’ve had for COBOL for some time now.

    The problem isn’t a conversion of what may or may not be complex code, it’s taking the time to prove out a new solution.

    I can take any old service program on one of our IBM i machines and convert it to Java, no problem. The issue arises when some other subsystem that relies on it gets stalled out, because the activation group is transient and spin-up of the JVM is the stalling part.

    Now suddenly I need named activation, and that means I need to take lifetimes into account. Static values are now living between requests when procedures don’t initialize them, and that is a great way to start leaking data all over the place. And when you start putting other people’s phone numbers on 15-year contracts that have serious legal ramifications, legal doesn’t tend to like that.
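    A minimal Java sketch of the kind of leak this causes (names and shape made up for illustration) when formerly per-call working storage turns into long-lived state after conversion:

    ```java
    // Hypothetical: in the original, this storage was reinitialized per activation.
    // After a naive conversion, the field lives as long as the service object does.
    public class ContractService {
        private String contactPhone; // survives between requests - the leak

        public Contract createContract(Request req) {
            if (req.phone() != null) {
                contactPhone = req.phone();
            }
            // If req.phone() is null, the previous caller's number silently ends up
            // on this caller's 15-year contract.
            return new Contract(req.customerId(), contactPhone);
        }

        record Request(String customerId, String phone) {}
        record Contract(String customerId, String phone) {}
    }
    ```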

    It isn’t enough to convert COBOL 1:1 to Java. You have to understand what the program is trying to get done, and just looking at the code isn’t going to make that obvious. Another example: this module locks a data area down because we need this other module to hit an error condition. The restart condition for that module reloads it into a different mode that’s appropriate for the process, which then sends a message to the guest module to unlock the data area.

    Yes, I shit you not. There is a program out there doing critical work where the expected execution path is to deliberately cause an error so that some code in the recovery path gets run. How many of you think an AI is going to pick up on that context?

    The tools back then were limited, so programmers did all kinds of hacky things to get particular jobs done. We’ve got tools now to fix that; it’s just that so much has already been layered on top of the way things currently work. Pair that with the fact that we cannot buy a second machine to build a new system on, and that any new program must work 99.999% right out of the gate.

    COBOL is just a language; it’s not the biggest problem. The biggest problem is the expectation. These systems run absolutely critical functions that simply cannot fail. Foraying into Java or whatever language means building a system that doesn’t have 45 years’ worth of testing behind it and yet is expected to run perfectly. It’s just not a realistic expectation.

    • aksdb@feddit.de · +19 · 1 year ago

      What pisses me off about many such endeavors is that these companies always want big-bang solutions. Those are excessively hard to plan out due to the complexity of these systems, so it’s hard to put a financial number on the project, and they typically end up with hundreds of people involved during “planning” just to be sacked before any meaningful progress can be made.

      Instead they could simply take the engineers they need for maintenance anyway and give them the freedom to rework the system in the time they are assigned to the project. Those systems are, in my opinion, basically microservice systems: thousands of more or less small modules interconnected by JCL scripts and batch processes. So instead of doing it big bang, you could tackle it module by module. A module doesn’t care what language the other side is written in, as long as it can still work with the same data structure(s).
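      To make that concrete: a rewritten module can keep speaking the fixed-width record format its COBOL/JCL neighbours expect. A minimal Java sketch, with a made-up record layout (the zoned-decimal handling is simplified):

      ```java
      import java.nio.charset.Charset;

      // Hypothetical layout: 05 CUST-ID PIC X(8).  05 BALANCE PIC 9(9)V99.
      public final class CustomerRecord {
          private static final Charset EBCDIC = Charset.forName("IBM1047");

          public static String customerId(byte[] record) {
              return new String(record, 0, 8, EBCDIC).trim();
          }

          public static long balanceCents(byte[] record) {
              // Real zoned decimal carries the sign in the last byte's zone nibble;
              // treated as plain digits here to keep the sketch short.
              return Long.parseLong(new String(record, 8, 11, EBCDIC));
          }
      }
      ```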

      Pick a module, understand it, write tests if they are missing, and then rewrite it.

      After some years of doing that, all modules will be in a modern language (Java, Go, Rust, whatever) and you will have test coverage and hopefully even documentation. Then you can start refactoring the architecture.

      But I guess that would be too easy and not enterprisy enough.

        • aksdb@feddit.de · +9 · 1 year ago

          I said it takes years. The point is that you can do it incrementally. But that typically doesn’t fit the way enterprises want things done. They want to know a beginning, a timeline, and a price. Since they don’t get that, they simply give up.

          But it’s dumb, since those systems are already running and have to keep running, so they need to keep engineers around who know these systems anyway. Since maintenance work likely doesn’t take up all their time, they could “easily” kill two birds with one stone: the engineers have a full-time job on the legacy system (keeping them in the loop for when an incident happens, without having to pull them out of other projects and force them into a context switch), and you slowly get to a modernized system.

          Not doing anything doesn’t improve their situation and the system doesn’t get any less complex over time.

      • gedhrel@lemmy.ml · +3 · 1 year ago

        I think you vastly overestimate the separability of these systems.

        Picture 10,000 lines of code in one method, with a history of multiple decades.

        Now picture that that method has, buried inside it, complex interactions with another method of similar size, which is triggered via an obscure side effect.

        Picture whole teams of developers adding to this on a daily basis in realtime.

        There is no “meaningful progress” to be made here. It may offend your aesthetic sense, but it’s just the reality of doing business.

        • aksdb@feddit.de · +2 · 1 year ago

          What’s the alternative in your opinion?

          Not doing anything and keeping on fiddling around in this mess for the next 20 years?

          Continuing to try to tackle this problem big-bang, which means dealing not just with one such unmaintainable module but with all of them at once?

          Will every module be a piece of cake? Hell no. But if you never start anywhere, it doesn’t get better on its own.

          • gedhrel@lemmy.ml · +1 · 1 year ago

            The alternative is to continue with a process that’s been demonstrably successful, despite it offending your sensibilities.

            Banks are prepared to pay for it. People are prepared to do it. It meets the business needs. Change is massively high-risk in a hugely conservative industry.

    • Kerfuffle@sh.itjust.works · +4 · 1 year ago

      This sounds no different than the static analysis tools we’ve had for COBOL for some time now.

      One difference is that people might kind of understand how the static analysis tools we’ve had for some time actually work; LLMs are basically a black box. You also can’t easily debug or fix a specific problem. The LLM produces wrong code in one particular case: what do you do? You can try fine-tuning with examples of the problem and what the output should be, but there’s no guarantee that won’t subtly change other things and add a new issue for you to discover at a future time.

  • eyy@lemm.ee · +37 · 1 year ago

    Not a COBOL professional, but I know companies that have tried (and failed) to migrate from COBOL to Java because of the enormously high stakes involved (usually financial).

    LLMs can speed up the process, but ultimately nobody is going to just say “yes, let’s accept all suggested changes the LLM makes”. The risk appetite of companies won’t change because of LLMs.

    • Kache@lemm.ee · +9 · 1 year ago

      Wonder what makes it so difficult. “Cobol to Java” doesn’t sound like an impossible task since transpilers exist. Maybe they can’t get similar performance characteristics in the auto-transpiled code?

      • qaz@lemmy.world · +16 · 1 year ago

        COBOL programs are structured very differently from Java programs. For example, you can’t just declare a variable where you need it; you have to add it to the WORKING-STORAGE SECTION at the top of the program.
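        Roughly, for illustration (hypothetical names): an item declared once in WORKING-STORAGE ends up as something like a field in the converted Java class rather than a local declared at the point of use.

        ```java
        // COBOL (declared up front, visible to the whole program):
        //   WORKING-STORAGE SECTION.
        //   01 WS-RETRY-COUNT  PIC 9(4) VALUE ZERO.
        public class PaymentBatch {
            private int retryCount = 0; // was WS-RETRY-COUNT

            void processOne() {
                retryCount++; // every paragraph/method sees and mutates the same storage
            }
        }
        ```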

        • Kache@lemm.ee · +5 · 1 year ago

          That example doesn’t sound particularly difficult. I’m not saying it’d be trivial, but it should be approximately as difficult as writing a compiler. Seems like the real problem is not a technical one.

          • eyy@lemm.ee · +5 · 1 year ago

            It’s never been a technical reason; it’s the fact that most systems still running on COBOL are live, can’t be easily paused, and carry an extremely high risk of enormous consequences for failure. Banks are a great example of this: hundreds of thousands of transactions per hour (or more). You can’t easily create a backup, because even while you’re backing up, more business logic and more records are being created. You can’t just tell people “hey, we’re shutting off our system for 2 months, come back and get your money later”. And if you fuck up during the migration and rectify it within an hour, you would still have caused hundreds or thousands of people to lose some money, and god forbid there was one unlucky SOB who tried to transfer their life savings during that one hour.

            And don’t forget the testing that needs to be done - you can’t even have an undeclared variable that somehow causes an overflow error when a user with a specific attribute deposits a specific amount of money in a specific branch code when Venus and Mars are aligned on a Tuesday.

            • PuppyOSAndCoffee@lemmy.ml · +1 · 1 year ago

              What.

              Most COBOL systems have more code that doesn’t do anything than code that actually does something.

              The grammar of COBOL plus the shit UI of mainframes means there is a shit ton of tribal anti-patterns in each program. It is a PITA to add fields and variables, so lazy programmers would just reuse something that wasn’t being used at that instant. Less work and more job security.

              What values do variables ROBERT1, ROBERT2 and ROBERT3 hold? Whatever ROBERT wanted.

              The reason why these things still exist is business laziness. They don’t know and don’t care what cobol is or isn’t doing.

              I am struck by … conversion to … Java? Really??? lol.

              • eyy@lemm.ee · +2 · 1 year ago

                Most COBOL systems have more code that doesn’t do anything than code that actually does something.

                What values do variables ROBERT1, ROBERT2 and ROBERT3 hold? Whatever ROBERT wanted.

                And when that system is storing high-risk and/or sensitive data, do you really want to be the person who deletes code that you think “actually does nothing”, only to find out it somehow stopped another portion of code from breaking?

                The reason why these things still exist is business laziness. They don’t know and don’t care what cobol is or isn’t doing.

                That’s the thing - for a risk-averse industry (most companies running COBOL systems belong here), being the guy who architected the move away from COBOL is a high-risk, high-stress job with little immediate reward. At best, the move goes seamlessly and management knows you as “the guy who updated our OS or something and saved us some money but took a few years to do it, while Bob updated our HR system and saved a bunch of money in 1 year”. At worst, you accidentally break something, and now you have a fiasco on your hands.

                • PuppyOSAndCoffee@lemmy.ml · +2 · 1 year ago

                  Org change vs vertical integration, which is worse and why in 500 words or less ;)

                  Pay now or pay later…the organization is going to get kicked in the nuts.

                  For industries that have the option, companies who got kicked in the nuts ten yrs ago are doing better today than those who are still waiting.

                  IBM should shoulder a lot of the blame; there really is no reason why COBOL couldn’t be phased out in place, except that it would hurt IBM’s market share, so it is not exactly a “thing”. An in-place transition to Rust should just be another section of the z/OS manual.

      • DefinitelyNotAPhone [he/him]@hexbear.net · +6/-1 · 1 year ago

        Translating it isn’t the difficult part. It’s convincing a board room full of billionaires that they should flip the switch and risk having their entire system go down for a day because somebody missed a bug in the code and then having to explain to some combination of very angry other billionaires and very angry financial regulators why they broke the economy for the day.

        • Skiptrace@lemmy.one · +1 · 1 year ago

          Well, I’d rather the day be sooner than later. Also, this is why you have… backup servers and development environments. You don’t just flick the switch randomly one day after the code is written. You run months and months of simulated transactions on the new code until you get an adequate amount of bugs fixed.

          There will come a time when these old COBOL machines will just straight die, and they can’t be assed to keep making new hardware for them. And the programmers will all die out too. And then you’re shit out of luck. I’d rather the last few remaining COBOL programmers help translate to some other long-lasting language before they all kick the bucket, not after.

          • DefinitelyNotAPhone [he/him]@hexbear.net · +7 · 1 year ago

            Well, I’d rather the day be sooner than later.

            Agreed, but we’re not the ones making the decision. And the people who are have two options: move forward with a risky, expensive, and potentially career-ending project with no benefits other than the system being a little more maintainable, or continue on with business as usual and earn massive sums of money they can use to buy a bigger yacht next year. It’s a pretty obvious decision, and the consequences will probably fall on whoever takes over after they move on or retire, so who cares about the long-term consequences?

            You run months and months of simulated transactions on the new code until you get an adequate amount of bugs fixed.

            The stakes in financial services are so much higher than in typical software. If some API has 0.01% downtime or errors, nobody gives a shit. If your bank drops 1 out of every 1000 transactions, people lose their life savings. Even the most stringent testing and staging environments don’t guarantee the level of accuracy required without truly monstrous sums of money being thrown at them, which leads us back to my point above about risk vs yachts.

            There will come a time when these old COBOL machines will just straight die, and they can’t be assed to keep making new hardware for them.

            Contrary to popular belief, most mainframes are pretty new machines. IBM is basically afloat purely because giant banks and government institutions would rather just shell out a few hundred thousand every few years for a new, better Z-frame than going through the nightmare that is a migration.

            If you’re starting to think “wow, this system is doomed to collapse under its own weight and the people in charge are actively incentivized to not do anything about it,” then you’re paying attention and should probably start extending that thought process to everything else around you on a daily basis.

          • PuppyOSAndCoffee@lemmy.ml · +1 · 1 year ago

            Nah, dump it all.

            COBOL programs don’t handle UTF-8 or other modern things like truly variable-length strings.

            Best thing to do is refactor and periodically test by turning off the mainframe system to see what breaks. Why something was done is lost to the sands of time at this point.

  • halfempty@kbin.social · +35/-4 · 1 year ago

    That’s a lot of effort to go from one horrible programming language to another horrible programming language.

    • Juja@lemmy.world · +11/-2 · 1 year ago

      What would your language of choice have been? And why is Java horrible for this scenario? It sounds like a reasonably good choice to me.

      • AttackPanda@programming.dev · +2 · 1 year ago

        I’m thinking Go or Rust would be the logical next step. They probably won’t want an interpreted language so Python is out.

        • Juja@lemmy.world · +1 · 1 year ago

          Just curious: what about Go or Rust makes them the logical next choice and not Java? What do Go or Rust do better that Java doesn’t?

          • PuppyOSAndCoffee@lemmy.ml · +1 · 1 year ago

            Java is an Oracle honeypot, a royal sustainment PITA, and a massive security liability; it clutters up systems with its nonsense and is slow as shit.

            “dear diary, despite running on a system with 1TB of RAM, a routine security patch reset the Java max memory quota and now every Java process stops after 256MB of object allocation. All four threads ran out of memory with 999GB RAM free. Thank you for this wonderful and blessed gift of computational ineptitude, amen.”
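            (For what it’s worth, a startup guard catches that scenario before it OOMs under load; my own sketch, with an arbitrary threshold:)

            ```java
            // Log the effective heap cap at startup so a silently reset -Xmx or
            // container/LPAR quota is noticed before the process runs out of memory.
            public class HeapCheck {
                public static void main(String[] args) {
                    long maxBytes = Runtime.getRuntime().maxMemory();
                    System.out.printf("JVM max heap: %d MiB%n", maxBytes / (1024 * 1024));
                    if (maxBytes < 512L * 1024 * 1024) { // arbitrary floor for the sketch
                        throw new IllegalStateException("Heap cap suspiciously low - check -Xmx");
                    }
                }
            }
            ```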

        • Taco@lemmy.zip · +2/-2 · 1 year ago

          JavaScript is actually really nice as a beginner programming language because of how quickly and visually you can see your results, and how easily you can debug with console output. Yeah it’s horribly unoptimized but it’s not for big things. It’s for little things. It’s baby’s first programming language.

          • PuppyOSAndCoffee@lemmy.ml · +1 · 1 year ago

            It actually is pretty quick. Don’t sleep on JavaScript’s capabilities. However, it is untyped. You wouldn’t want the date you wrote your check to become the amount of your check, for example.

            TypeScript does a nice job there, but all in all, at that point you might as well go all-in on a typed language.

  • 4stringscooter@lemmy.ml · +29 · 1 year ago

    So the fintech companies who rely on that tested (though unliked) lump of iron from IBM running an OS, language, and architecture built to do fast, high-throughput transactional work should trust AI to turn it into Java code to run on hardware and infrastructure of their own choosing without having architected the whole migration from the ground up?

    Don’t get me wrong, I want to see the world move away from cobol and ancient big blue hardware, but there are safer ways to do this and the investment cost would likely be worth it.

    Can you tell I work in fintech?

    • PeterPoopshit@lemmy.world · +5/-3 · 1 year ago

      But then at least by the time they get it working, they’ll have enough practice to make a new llm to convert their Java code to a useful programming language.

      Java is definitely a programming language but good luck actually getting it to compile on anyone else’s machine besides the person who wrote the project.

  • FoxBJK@midwest.social · +33/-9 · 1 year ago

    Converting ancient code to a more modern language seems like a great use for AI, in all honesty. There aren’t a lot of COBOL devs out there, but once it’s Java, the number of coders available to fix/improve whatever ChatGPT spits out jumps exponentially!

    • gravitas_deficiency@sh.itjust.works · +37/-2 · 1 year ago

      The fact that you say that tells me that you don’t know very much about software engineering. This whole thing is a terrible idea, and has the potential to introduce tons of incredibly subtle bugs and security flaws. ML + LLM is not ready to be used for stuff like this at the moment in anything outside of an experimental context. Engineers are generally - and with very good reason - deeply wary of “too much magic” and this stuff falls squarely into that category.

      • FoxBJK@midwest.social · +2/-7 · 1 year ago

        All of that is mentioned in the article. Given how much it cost last time a company tried to convert from COBOL, don’t be surprised when you see more businesses opt for this cheaper path. Even if it only converts half of the codebase, that’s still a huge improvement.

        Doing this manually is a tall order…

        • sugar_in_your_tea@sh.itjust.works · +11 · 1 year ago

          And doing it manually is probably cheaper in the long run, especially considering that COBOL tends to power some very mission critical tasks, like financial systems.

          The process should be:

          1. set up a way to have part of your codebase in your new language
          2. write tests for the code you’re about to port
          3. port the code
          4. go to 2 until it’s done

          If you already have a robust test suite, step 2 becomes much easier.

          We’re doing this process on a simpler task of going from Flow (JavaScript with types) to TypeScript, but I did a larger transition from JavaScript to Go and Ruby to Python using the same strategy and I’ve seen lots of success stories with other changes (e.g. C to Rust).

          If AI is involved, I would personally use it only for step 2, because writing tests is tedious and usually pretty easy to review. However, I would never use it for both step 2 and step 3, because of the risk of introducing subtle bugs. LLMs don’t understand the code; they merely spot patterns, and that’s absolutely not what you want.
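          For step 2, the kind of tests that matter here are characterization tests: pin down what the existing routine actually does (by running it), then check the port against that. A minimal JUnit sketch with made-up module names and values:

          ```java
          import static org.junit.jupiter.api.Assertions.assertEquals;
          import org.junit.jupiter.api.Test;

          class FeeCalcCharacterizationTest {
              // "LegacyPort" stands in for whatever class the ported routine lives in.
              @Test
              void matchesLegacyOutputForKnownInputs() {
                  // Expected values captured by running the existing COBOL program,
                  // not by reasoning about what it "should" return.
                  assertEquals("000012750", LegacyPort.computeFee("CUST0001", "000250000"));
                  assertEquals("000000000", LegacyPort.computeFee("CUST0002", "000000000"));
              }
          }
          ```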

        • gravitas_deficiency@sh.itjust.works · +8 · 1 year ago

          Yeah, I read the article.

          They’re MASSIVELY handwaving a lot of detail away. Moreover, they’re taking the “we’ll fix it in post” approach by suggesting “we can just run an armful of security analysis software on the code after the system spits something out”. While that’s a great sentiment, you (and everyone considering this approach) need to consider that complex systems are pretty much NEVER perfect. There WILL be misses. Add to this the fact that a ton of the organizations that still use COBOL are banks - which are generally considered fairly critical to the day-to-day operation of our society - and you can see why I am incredibly skeptical of this whole line of thinking.

          I’m sure the IBM engineers who made the thing are extremely good at what they do, but at the same time, I have a lot less faith in the organizations that will actually employ the system. In fact, I wouldn’t be terribly shocked to find that banks would assign an inappropriately junior engineer to the task - perhaps even an intern - because “it’s as simple as invoking a processing pipeline”. This puts a truly hilarious amount of trust into what’s effectively a black box.

          Additionally, for a good engineer, learning any given programming language isn’t actually that hard. And if these transition efforts are done in what I would consider to be the right way, you’d also have a team of engineers who know both the input and output languages such that they can go over (at the very, very least) critical and logically complex areas of the code to ensure accuracy. But since this is all about saving money, I’d bet that step simply won’t be done.

          • IHeartBadCode@kbin.social · +7 · 1 year ago

            For those who have never worked on legacy systems: anyone who suggests “we’ll fix it in post” is asking you to do something that just CANNOT happen.

            With the systems I code for, if something breaks, we’re going to court over it. It’s not “oh no, let’s patch it real quick”; it’s your ass that’s going to be cross-examined on why the eff your system just wrote thousands of legal contracts that cannot be upheld as valid.

            Yeah, any article that suggests that “fix it in post” shit - especially the one linked here - should be considered trash; its author has no remote idea how deep in the shit you can be if you start getting a wild hair up your ass about changing out parts of a critical system.

            • gravitas_deficiency@sh.itjust.works · +3 · 1 year ago

              And that’s precisely the point I’m making. The systems we’re talking about here are almost exclusively banking systems. If you don’t think there will be some Fucking Huge Lawsuits over any and all serious bugs introduced by this - and there will be bugs introduced by this - then you straight up do not understand what it’s like to develop software for mission-critical applications.

        • Kerfuffle@sh.itjust.works · +4 · 1 year ago

          Even if it only converts half of the codebase, that’s still a huge improvement.

          The problem is it’ll convert 100% of the code base, but (you hope) only 50% of it will actually be correct. Which 50%? That’s left as an exercise to the reader. There’s no human, no plan, and no particular logic behind how it was converted, so it can be very difficult to understand code like that, and you can’t ask the person who wrote it why stuff is a certain way.

          Understanding large, complex codebases one didn’t write is a difficult task even under pretty ideal conditions.

          • PuppyOSAndCoffee@lemmy.ml · +2 · 1 year ago

            First, odds are only half the code is used, and in that half, 20% has bugs that the system design obscures. It’s that 20% that tends to take the lion’s share of the modernization effort.

            It wasn’t a bug then, even though it was already there; it’s a bug now.

          • FoxBJK@midwest.social · +1/-1 · 1 year ago

            The problem is it’ll convert 100% of the code base

            Please go read the article. They specifically say they aren’t doing this.

            • Kerfuffle@sh.itjust.works · +3 · 1 year ago

              I was speaking generally. In other words, the LLM will convert 100% of what you tell it to but only part of the result will be correct. That’s the problem.

              • FoxBJK@midwest.social · +1/-1 · 1 year ago

                And in this case they’re not doing that:

                “IBM built the Code Assistant for IBM Z to be able to mix and match COBOL and Java services,” Puri said. “If the ‘understand’ and ‘refactor’ capabilities of the system recommend that a given sub-service of the application needs to stay in COBOL, it’ll be kept that way, and the other sub-services will be transformed into Java.”

                So you might feed it your COBOL code and find it only converts 40%.

                • Kerfuffle@sh.itjust.works · +3 · 1 year ago

                  So you might feed it your COBOL code and find it only converts 40%.

                  I’m afraid you’re completely missing my point.

                  The system gives you a recommendation: that has a 50% chance of being correct.

                  Let’s say the system recommends converting 40% of the code base.

                  The system converts 40% of the code base. 50% of the converted result is correct.

                  50% is a random number picked out of thin air. The point is that what you end up with has a good chance of being incorrect and all the problems I mentioned originally apply.

    • HellAwaits@lemm.ee · +9/-1 · 1 year ago

      Is ChatGPT magic to people? ChatGPT should never be used in this way, because the potential for critical errors is astronomically high. IBM doesn’t know what it’s doing.

    • socsa@lemmy.ml · +3 · 1 year ago

      I’m more alarmed at the conversation in this thread about migrating these COBOL apps to Java. Maybe I am the one who is out of touch, but what the actual fuck? Is it just because of the large Java hiring pool? If you are effectively starting from scratch, why in the ever-loving fuck would you pick Java?

    • LeylaLove [she/her, love/loves]@hexbear.net · +3/-1 · 1 year ago

      This is what I’m thinking. Even the few people I know IRL who know COBOL from their starting days say it’s a giant pain in the ass as a language. It’s not like it’s really going to cost all that much time compared to paying labor to rewrite it from the ground up, even if they don’t end up using it. Sure, correcting bad code can take a lot of time to do manually. But important code being in COBOL is a ticking time bomb; they gotta do something.

        • FaceDeer@kbin.social · +2/-3 · 1 year ago

          Counter-counterpoint: the longer you let it sit, the more obsolete the language becomes and the harder it gets to fix when something does break.

          This is essentially preventative maintenance.

          • gravitas_deficiency@sh.itjust.works · +3 · 1 year ago

            Counter^3 point: a system that was thoroughly engineered and tested a long time ago, and that still fulfills all the technical requirements it must meet, will simply not spontaneously break.

            Analogously: this would be like using an ML + LLM to rewrite the entire Linux kernel in Rust. While an (arguably) admirable goal, doing that in one fell swoop would be categorically rejected by the Linux community, to the extent that if some group of people somehow unilaterally just merged that work, the rest of the Linux kernel dev community would almost certainly trigger a fork of the entire kernel, with the vast majority of the community using the forked version as the new source of truth.

            This is not preventative maintenance. This is fixing something that’s not broken, that has moreover worked reliably, performantly (enough), and correctly for literal decades. You do not let a black box rewrite your whole codebase in another language and then expect everything to magically work.

  • argv_minus_one@beehaw.org · +22 · 1 year ago

    If even highly skilled humans couldn’t do that, artificial pseudointelligence doesn’t stand a chance in hell.

    There’s nothing of substance here. Just suits chasing buzzwords. Nothing will actually happen, just like nothing actually happened every other time some fancy new programming language or methodology came along and tried to replace COBOL, including Java.

    • duncesplayed@lemmy.one · +27 · 1 year ago

      This is what I don’t get. Rewriting COBOL code into Java code is dead easy. You could teach a junior dev COBOL (assuming this hasn’t been banned under the Geneva Convention yet) and have them spitting out Java code in weeks for a lot cheaper.

      The problem isn’t converting COBOL code to Java code. The problem is converting COBOL code to Java code so that it cannot ever possibly have even the most minute difference or bug under any possible circumstances ever. Even the tiniest tiniest little “oh well that’s just a silly little thing” bug could cost billions of dollars in the financial world. That’s why you need to pay COBOL experts millions of dollars to manage your COBOL code.
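      To make “minute difference” concrete, here’s a Java illustration (my own, with made-up numbers) of one classic way ports silently diverge: COBOL’s packed/zoned decimals are exact decimal arithmetic, while a careless Java port that reaches for double is not.

      ```java
      import java.math.BigDecimal;
      import java.math.RoundingMode;

      public class PennyDrift {
          public static void main(String[] args) {
              double careless = 0.0;
              for (int i = 0; i < 1_000_000; i++) careless += 0.01; // binary float can't hold 0.01
              System.out.println(careless); // not exactly 10000.0 - accumulated rounding error

              BigDecimal exact = BigDecimal.ZERO;                   // closer to PIC 9(7)V99 behaviour
              for (int i = 0; i < 1_000_000; i++) exact = exact.add(new BigDecimal("0.01"));
              System.out.println(exact.setScale(2, RoundingMode.HALF_UP)); // 10000.00
          }
      }
      ```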

      I don’t understand what person looked at this problem and said “You know what never does anything wrong or makes any mistake ever? Generative AI”

      • PuppyOSAndCoffee@lemmy.ml · +2 · 1 year ago

        Ooh good point

        What if IBM had a product that did the COBOL->Java conversion (not really a “what if”, tbh - I believe it exists), and just changed the marketing material to make it seem flashy?

        So, like, you think it’s AI, but really it’s the same grammar-translation functions that have been around forever.

  • Treczoks@lemm.ee · +21 · 1 year ago

    “all those COBOL developer jobs” nowadays probably fit in one bus. That’s why every company that can afford it moves away from COBOL.

  • ArbitraryValue@sh.itjust.works · +19 · 1 year ago

    according to a 2022 survey, there’s over 800 billion lines of COBOL in use on production systems, up from an estimated 220 billion in 2017

    That doesn’t sound right at all. How could the amount of COBOL code in use quadruple at a time when everyone is trying to phase it out?

    • eyy@lemm.ee · +6 · 1 year ago

      That doesn’t sound right at all. How could the amount of COBOL code in use quadruple at a time when everyone is trying to phase it out?

      Because while they’re trying, they need to keep adding business logic to it constantly. Spaghetti code on top of spaghetti code.

    • kitonthenet@kbin.social · +4 · 1 year ago

      It could mean anything: the same code used in production in new ways, slightly modified code, newly discovered COBOL where the original language was a mystery, new requirements for old systems… seriously, it could be too many things for that to be a useful metric without context.

    • RickyRigatoni@lemmy.ml · +3 · 1 year ago

      Maybe some production systems were replicated at some point and they’re adding those as unique lines?

    • BudgieMania@kbin.social · +3 · 1 year ago

      trying

      That’s the keyword right there. Everyone wants to phase mainframe shenanigans out until they get told about the investments necessary to do it, then they are happy to just survive with it.

      I’m currently at a company that’s actually trying it and it’s being a pain

  • Aurenkin@sh.itjust.works · +16/-1 · 1 year ago

    ChatGPT did an amazing job converting my Neovim config from VimScript to Lua, including explaining each part and how it was different. That was a very well-scoped piece of code, though. I’d be interested to see how an LLM goes on large projects, as I imagine that would be a whole different level of complexity. You need to understand a lot more about the components and their interactions, and be very careful not to change behaviour. Security is another important concern that was already mentioned in this thread and in the article itself.

    I’d put myself down as doubtful, but really interested to see the results nonetheless. I’ve already been surprised a few times over by these things, so who knows.

  • Pavlichenko_Fan_Club [comrade/them]@hexbear.net · +15 · 1 year ago

    Oh FFS, there is nothing magical about COBOL, like it’s some kind of sword in the stone which only a chosen few can draw. COBOL is simple(-ish), COBOL is verbose. That’s why there is so much of it.

    The reason you don’t see new developers flocking to these mythical high-paying COBOL jobs is that it’s not about the language, but rather about maintaining these ginormous, mission-critical applications that are basically black boxes due to the loss of institutional knowledge. Very high risk with almost no tangible, immediate reward, so don’t touch it. Not something you can just throw a new developer at and hope for the best; the only person who knew this stuff was some guy named “John”, and he retired 15 years ago! Etc., etc.

    Also, this is IBM we’re talking about, so purely buzzword-driven development. IBM isn’t exactly known for pushing the envelope recently. Plus, transpilers have existed as a concept since… forever, basically? I doubt anything more will come from this other than upselling existing IBM customers who are already replacing COBOL.

    • quicken@aussie.zone · +7 · 1 year ago

      Because IBM doesn’t want to tie themselves to Google or Microsoft. They already have their own builds of OpenJDK.

    • loutr@sh.itjust.works · +5 · 1 year ago

      Because COBOL is mainly used in enterprise environments, where they most likely already run Java software that interfaces with the old COBOL software. Plus, modern Java is a pretty good language; it’s not 2005 anymore.
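      For anyone whose mental model of Java stopped around 2005, an illustrative sketch (nothing to do with the article) of a few features that have landed since: records, local type inference, and switch expressions.

      ```java
      public class ModernJava {
          record Transfer(String from, String to, long amountCents) {} // records (Java 16)

          static String describe(Transfer t) {
              var direction = t.amountCents() >= 0 ? "credit" : "debit"; // var (Java 10)
              return switch (direction) {                                // switch expressions (Java 14)
                  case "credit" -> t.to() + " receives " + t.amountCents() + " cents";
                  case "debit"  -> t.from() + " pays " + (-t.amountCents()) + " cents";
                  default       -> "unknown";
              };
          }

          public static void main(String[] args) {
              System.out.println(describe(new Transfer("ACC-1", "ACC-2", 1250)));
          }
      }
      ```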

  • Zeth0s@lemmy.world · +8/-2 · 1 year ago

    Of all modern languages, why Java, which will likely soon become legacy itself for backend applications?

    • datendefekt@lemmy.ml · +6 · 1 year ago

      Sadly, I haven’t been programming for a while, but I did program in Java. Why do you consider it legacy, and do you see a specific language replacing it?

        • datendefekt@lemmy.ml · +1 · 1 year ago

          I was pretty impressed by what I saw from Kotlin. Pragmatic and terse, not as academic as Java. Reminds me of the shift away from EJB to Spring. I have been reading up on Rust and thought that, with LLVM and WebAssembly (also for the backend), it is perfectly positioned as an alternative. What do you think?