A few months ago I fell into the optimization rabbit hole, as I had a slow quant finance library to take care of. So far my most successful optimizations have been local memory allocators (see my C++ post; I also played with mimalloc, which helped, but custom local allocators are even better) and rethinking class layouts in a more “data-oriented” way (mostly going from array-of-structs to struct-of-arrays layouts whenever it’s advantageous to do so, see for example this talk).
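
For the curious, here’s a minimal sketch of what the AoS-to-SoA change looks like (the field names are just illustrative, not from my actual library): iterating over a single field touches far less memory in the SoA layout, and the loop vectorizes trivially.

	#include <vector>
	
	// Array-of-structs: summing prices strides over whole Quote objects,
	// dragging the unused fields through the cache as well.
	struct Quote { double price; double volume; long timestamp; };
	
	double sum_aos(const std::vector<Quote> &quotes) {
		double total = 0.0;
		for (const auto &q : quotes) total += q.price;
		return total;
	}
	
	// Struct-of-arrays: the prices are contiguous, so the same loop touches
	// a third of the memory and auto-vectorizes easily.
	struct Quotes {
		std::vector<double> price;
		std::vector<double> volume;
		std::vector<long> timestamp;
	};
	
	double sum_soa(const Quotes &quotes) {
		double total = 0.0;
		for (double p : quotes.price) total += p;
		return total;
	}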

What are some of your preferred optimizations that yielded sizeable gains in speed and/or memory usage? I realize that many optimizations aren’t necessarily specific to any given language, so I’m asking in [email protected].

  • hperrin@lemmy.world · 43 points · 1 year ago

    I don’t know if this counts, but making sure not a single query in my app results in a full table scan in MySQL made a huge difference.

  • Alex@programming.dev · 33 points · 1 year ago

    More than ten years ago, I was a team lead at a game development company, and during yet another crunch I was overloaded with tasks, including code review. And of course, I missed some things. We were optimizing a game that in general did not reach 30 fps on the minimum-requirements target machine, and the target was at least 40 fps. Over a month we jointly optimized everything so that we were able to achieve an almost stable 40 frames, but this was not enough, and periodically there were drops to 15-20 fps. Everyone was panicking, there were no ideas left, we had optimized everything.

    No, I understand that a priori there is no limit to optimization, but I mean adequate optimizations, without rewriting everything in assembler specifically for every supported architecture.

    So, one night I looked into the event loop initialization and found that the initialization happened twice. Twice. Two parallel event loops. That is, two parallel cycles of logic and state updates. That night, by deleting one line, I improved the performance of the game by more than 100%. 🤦🏻‍♂️

    I investigated and found out that this line of secondary initialization had been left in by a junior “for debugging” and forgotten in the crunch. And I missed it in review.

    I’ve kept this story a secret all these years, and even now I’m not revealing names. Otherwise, it could have a dramatic impact on the careers of many.
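
    A hedged reconstruction of what that one line might have looked like (every name here is invented):

    	#include <vector>
    	
    	struct EventLoop {
    		void update(float dt) { /* pump events, update logic and state */ }
    	};
    	
    	struct Game {
    		std::vector<EventLoop> loops;
    	
    		void init() {
    			loops.emplace_back();   // the real event loop
    			loops.emplace_back();   // left in "for debugging": the one line
    		}                           // whose deletion doubled the frame rate
    	
    		void frame(float dt) {
    			for (auto &loop : loops)   // with two loops, every frame runs the
    				loop.update(dt);       // full logic and state update twice
    		}
    	};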

  • ShadowCatEXE@lemmy.world · 24 points · 1 year ago

    Staying on the SQL theme… The company I work for has a fairly old (~20 years) system. There’s a feature for users and site admins to export massive amounts of data, with the option to export data from when the system was first released, purely in CSV or XML formats. On large datasets, an export would take anywhere from 10-20+ hours and would frequently time out, forcing you to split exports into multiple timeframes and manually merge them into a single file. The solution? Indexes! Indexes were non-existent. After adding them, export times have dropped to ~10-15 minutes, which is a rather insane performance increase, especially since only a single export is accepted per account at a time.

  • Die4Ever@programming.dev · 23 points · edited · 1 year ago

    ok maybe not my biggest or cleanest optimization, but an interesting one

    I made the WCS Predictor for StarCraft 2 esports, which would simulate the whole year of tournaments to figure out each player’s chances of qualifying for the world championship (the top 16 players by WCS Points), and would also show which events would help or hurt each player. It was a big Monte Carlo simulation that ran millions of iterations. (My memory is fuzzy here because this was 2013 to 2014.)

    Originally I used a data-driven approach where each Tournament object had a vector of Round objects (like round of 32, or semifinals, etc.), each of which had a vector of Match objects (like Soulkey vs Innovation, or a 4-player group). The Round class had a best_of property (best of 3, best of 5, etc.). This was really inflexible because I couldn’t properly simulate complex tournament formats that didn’t fit my data, and it required a lot of if statements that could cause branch prediction misses.

    A tournament definition looked something like this:

    older code example
    		Round ro32("ro32");
    		ro32.best_of=3;
    		ro32.points_for_3rd=150+25;
    		ro32.points_for_4th=100+25;
    		Match ro32GroupA, ro32GroupB, ro32GroupC, ro32GroupD,
    		      ro32GroupE, ro32GroupF, ro32GroupG, ro32GroupH;   // one Match object per group
    
    		ro32.matches.push_back(ro32GroupA);
    		ro32.matches.push_back(ro32GroupB);
    		ro32.matches.push_back(ro32GroupC);
    		ro32.matches.push_back(ro32GroupD);
    		ro32.matches.push_back(ro32GroupE);
    		ro32.matches.push_back(ro32GroupF);
    		ro32.matches.push_back(ro32GroupG);
    		ro32.matches.push_back(ro32GroupH);
    
    		GSL.rounds.push_back( ro32 );
    

    When I rewrote it for the following year, I ditched the data-driven approach to try to get more efficiency and flexibility, but I definitely didn’t want virtual functions either, so I decided to use templates instead and a custom class for each tournament. Now creating a tournament looked like this:

    newer code example
    
    // template parameter names below are reconstructed from the usage in CodeSBase
    template <int num_players, class MatchType, int num_matches, class AdvancementType, bool gives_points>
    class Round : public RoundBase
    {
    public:
    	MatchType matches[num_matches];
    // ... more stuff ...
    };
    
    class CodeSBase : public TournamentBase
    {
    public:
    	Round<32, SwissGroup, 8, RandomAdvancement<16, 16>, true > ro32;
    	Round<16, SwissGroup, 4, A1vsB2<8, 8>, true > ro16;
    	Round<8, SingleMatch, 4, StraightAdvancement<4, 4>, true > quarterfinals;
    	Round<4, SingleMatch, 2, StraightAdvancement<2, 2>, true > semifinals;
    	Round<2, SingleMatch, 1, StraightAdvancement<1, 1>, true > finals;
    
    	void init(vector<Match*> &prev_matches, vector<Match*> &upcoming_matches)   // element type is a guess
    	{
    		quarterfinals.best_of = 5;
    		semifinals.best_of = 7;
    		finals.best_of = 7;
    		ro32.points_for_placing[3] = 100;
    		ro32.points_for_placing[2] = 150;
    		ro16.points_for_placing[3] = 300;
    		ro16.points_for_placing[2] = 400;
    		quarterfinals.points_for_placing[1] = 600;
    		semifinals.points_for_placing[1] = 900;
    		finals.points_for_placing[1] = 1250;
    		finals.points_for_placing[0] = 2000;
    		finals.match_placing_to_tournament_placing[0] = 1;
    	}
    
    
    	void predict(Simulation &sim, array<Player*, 8> &top8, array<Player*, 24> &bottom24, array<Player*, 2> &finalists, Rand64 &rng)   // array element types are a guess
    	{
    		RandomAdvancement<16, 16> Ro32toRo16;
    		A1vsB2<8, 8> Ro16toRo8;
    		StraightAdvancement<4, 4> Ro8toRo4;
    		StraightAdvancement<2, 2> Ro4toRo2;
    		StraightAdvancement<1, 1> finalsadv;
    
    		ro32.predict(sim, t_id, Ro32toRo16, rng);
    		ro16.AcceptAdvancements(Ro32toRo16.advancing_players);
    		ro16.predict(sim, t_id, Ro16toRo8, rng);
    		quarterfinals.AcceptAdvancements(Ro16toRo8.advancing_players);
    		quarterfinals.predict(sim, t_id, Ro8toRo4, rng);
    		semifinals.AcceptAdvancements(Ro8toRo4.advancing_players);
    		semifinals.predict(sim, t_id, Ro4toRo2, rng);
    		finals.AcceptAdvancements(Ro4toRo2.advancing_players);
    		finals.predict(sim, t_id, finalsadv, rng);
    
    		top8 = Ro16toRo8.advancing_players;
    		for (uint i = 0; i<8; i++) {
    			bottom24[i] = Ro32toRo16.falling_players[i * 2];
    			bottom24[i + 8] = Ro32toRo16.falling_players[i * 2 + 1];
    			bottom24[i + 16] = Ro16toRo8.falling_players[i];
    		}
    		finalists[0] = finalsadv.advancing_players[0];
    		finalists[1] = finalsadv.falling_players[0];
    	}
    };
    

    All the data had very good locality thanks to arrays instead of vectors, everything could be inlined, branch-predicted and prefetched, and complex tournament formats could be built, even properly handling high placements in one tournament granting seeding into a different tournament. I think I gained something like a 3x to 5x speed boost from this alone. I also made it multithreaded and made it work in batches, which improved performance further and let me get updated results more quickly and with a higher number of samples. I also made it able to output results and then keep processing more batches of samples to refine the numbers from there. I wouldn’t normally suggest these kinds of optimizations, but it’s a very unusual program to have such a wide hot loop.

    The website is gone, but here’s a working archive of it (which I didn’t know existed until writing this post; I thought there were only broken archives).

    archive links, me reminiscing on old times, get ready for stats and graphs overload

    home page: https://web.archive.org/web/20150822091605/http://sc2.4ever.tv:80/

    a tournament page: https://web.archive.org/web/20160323082041/http://sc2.4ever.tv/?page=tournament&tid=27

    a player’s page: https://web.archive.org/web/20161129233856/http://sc2.4ever.tv/?pid=73

    a general checkup page: https://web.archive.org/web/20161130055346/http://sc2.4ever.tv/?page=checkup

    a page for players who must win tournaments to qualify: https://web.archive.org/web/20161129235740/http://sc2.4ever.tv/?page=must_wins

    page showing the simulations history: https://web.archive.org/web/20161130055230/http://sc2.4ever.tv/?page=simulations

    FAQ page: https://web.archive.org/web/20160323073344/http://sc2.4ever.tv/?page=faq

    the fantasy league WCS Wars: https://web.archive.org/web/20161130055233/http://sc2.4ever.tv/?page=gamehome

    my WCS Wars user page: https://web.archive.org/web/20160704040152/http://sc2.4ever.tv/?page=user&uid=1

  • Mikina@programming.dev · 18 points · 1 year ago

    I was working on a pretty well known game, porting it to consoles.

    On PS4 we started getting OOM crashes after you’d played a few levels, because the PS4 doesn’t have that much memory. I was still fairly new to the project and didn’t know it very well, so I started profiling.

    It turned out that all the levels were saved as pretty descriptive JSON files, and all of them were referenced from Unity ScriptableObjects. Since anything referenced by a ScriptableObject gets loaded immediately, every level got pulled into memory even if you weren’t playing it: 1.7 GB of JSON strings loaded as soon as the game started, staying there for the whole session.

    I wrote a build script that compresses the JSON strings using gzip, and the game then decompresses them when loading the actual level.

    It reduced the memory for all the levels from 1.7 GB down to 46 MB, while also cutting the game’s load time by around 5 seconds.
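
    In C++ terms, the idea looks roughly like this sketch using zlib (the real version was a Unity build script; these helper names are invented, and zlib’s compress produces the zlib format rather than literal gzip, but the principle is the same):

    	#include <zlib.h>
    	#include <stdexcept>
    	#include <string>
    	#include <vector>
    	
    	// Levels stay compressed in memory and are only inflated on demand.
    	std::vector<unsigned char> compress_json(const std::string &json) {
    		uLongf out_len = compressBound(json.size());
    		std::vector<unsigned char> out(out_len);
    		if (compress(out.data(), &out_len,
    		             reinterpret_cast<const Bytef *>(json.data()), json.size()) != Z_OK)
    			throw std::runtime_error("compression failed");
    		out.resize(out_len);   // typically a huge shrink for repetitive JSON
    		return out;
    	}
    	
    	std::string decompress_json(const std::vector<unsigned char> &blob, size_t original_size) {
    		std::string json(original_size, '\0');   // original size is stored alongside the blob
    		uLongf out_len = original_size;
    		if (uncompress(reinterpret_cast<Bytef *>(&json[0]), &out_len,
    		               blob.data(), blob.size()) != Z_OK)
    			throw std::runtime_error("decompression failed");
    		json.resize(out_len);
    		return json;
    	}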

  • docktordreh@discuss.tchncs.de · 17 points · 1 year ago

    A database optimization I made was changing a table’s ID generation from a manual scheme (another table held the next usable ID and was updated with every entry written) to a UUID generation scheme. The table stores data from a daily import, and on a fresh import all previous data is deleted. On some systems there are more than 10 000 000 entries to import on a daily basis, which used to take 8 hours. Now, with batched inserts and the mentioned change to the DB scheme, it’s at about 20 minutes.

    TLDR: Reducing the number of queries sent is good, because although the network is usually fast (milliseconds), DB requests are still slow compared to the speed of the application itself (clock cycles).
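
    As a rough sketch of the batching side of this (Db here is an invented stand-in, not a real driver API):

    	#include <algorithm>
    	#include <cstddef>
    	#include <string>
    	#include <vector>
    	
    	struct Entry { std::string data; };
    	
    	// Invented stand-in for a real database driver.
    	struct Db {
    		void insert_batch(const std::vector<Entry> &) { /* one round trip for many rows */ }
    	};
    	
    	// Amortize the per-query network round trip over many rows
    	// instead of paying it once per entry.
    	void daily_import(Db &db, const std::vector<Entry> &entries) {
    		const std::size_t batch_size = 1000;   // illustrative
    		for (std::size_t i = 0; i < entries.size(); i += batch_size) {
    			std::size_t end = std::min(i + batch_size, entries.size());
    			db.insert_batch({entries.begin() + i, entries.begin() + end});
    		}
    	}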

    And yeah, there is an option to only import changes daily, but sadly that isn’t supported in every environment.

  • lysdexic@programming.dev · 15 points · 1 year ago

    The optimization I’m most proud of was on a legacy project whose end-to-end builds took around 1 hour. After spending some time working on its architecture, project layout and build system, I managed to get full end-to-end builds down to 10 minutes, and incremental builds to be almost instant.

    What makes me most proud of this project is that the technical debt plaguing that legacy codebase was, directly and indirectly, the reason why half a dozen of my team members burned out and quit the company. After that point my remaining team members were far less stressed and team velocity skyrocketed, simply because iterating on a bugfix and posting a pull request no longer cost at least one hour, and sometimes two or three.

      • lysdexic@programming.dev · 2 points · 1 year ago

        > Surely you got a bonus and a raise out of it right? Right??

        The only reward I got from it was recognition from my team members, which was already more than what I was expecting to get.

        My manager was praised for the higher team velocity and improvements in the team’s burndown chart. The hallmark of having done good work is seeing others trying to take credit for it.

  • oldfart@lemm.ee · 11 points · 1 year ago

    I patched someone else’s program which was known for getting slower the longer it was used. It iterated over the items in its window just to reach the last element, so the more items there were, the slower it got. It became snappy when I taught it to keep a pointer to the last element.
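
    The fix was essentially this (a minimal sketch with invented names, not the actual program’s code):

    	// With a singly linked list, appending by walking from the head is O(n)
    	// per item, so the program slows down as items accumulate. Caching a
    	// tail pointer makes each append O(1).
    	struct Node { int value; Node *next = nullptr; };
    	
    	struct ItemList {
    		Node *head = nullptr;
    		Node *tail = nullptr;   // the one extra pointer
    	
    		void append(int v) {
    			Node *n = new Node{v};
    			if (!head) { head = tail = n; return; }
    			// Before the fix: walk the whole list every time.
    			//   Node *cur = head; while (cur->next) cur = cur->next; cur->next = n;
    			tail->next = n;      // after the fix: jump straight to the end
    			tail = n;
    		}
    	};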

    Not mind blowing, I know, but it was a popular program and this made life better for many people.

  • Tom@programming.dev · 10 points · edited · 1 year ago

    I’ve got so many more stories about bad optimizations. I guess I’ll pick one of those.

    There was an infamous (and critical) internal application somewhere I used to work. It took in a ton of data, put it in the database, and then ran a ton of updates to populate various fields and states. It was something like:

    • Put all data in x table with batch y.
    • Update rows in batch y with condition a, set as type a. (just using letters as placeholders for real states)
    • Update rows in batch y that haven’t been updated and have condition b, set as type b.
    • Update rows in batch y that haven’t been updated and have condition c, set as type c.
    • Update rows in batch y that have condition b and c and condition d, set as type d.
    • (Repeat many, many times)

    It was an unreadable mess. Trying to debug it was awful. Business rules encoded as a chain of SQL updates are incredibly hard to reason about. Like, how did this row end up with that data??

    A coworker and I eventually inherited the mess. Once we deciphered exactly what the rules were and realized they weren’t actually that complicated, we changed the architecture to:

    • Pull data row by row (instead of immediately into a database)
    • Hydrate the data into a model
    • Set up and work with the model based on the business rules we painstakingly reverse engineered (i.e. this row is type b because conditions x,y,z)
    • Insert models to database in batches

    I don’t remember the exact performance impact, but it wasn’t markedly faster or slower than the previous “fast” SQL-based approach. We found and fixed numerous bugs, and when new issues came up, they could be fixed in hours rather than days or weeks.
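
    The gist of the model-based approach looked something like this hedged sketch (the fields, conditions and types are invented placeholders, not the real business rules):

    	#include <vector>
    	
    	enum class RowType { A, B, C, D, Unknown };
    	
    	struct Row {
    		bool cond_a = false, cond_b = false, cond_c = false, cond_d = false;
    		RowType type = RowType::Unknown;
    	};
    	
    	// The rules live in one readable function instead of a chain of UPDATEs,
    	// so "how did this row end up with that data?" has an obvious answer.
    	RowType classify(const Row &r) {
    		if (r.cond_a) return RowType::A;
    		if (r.cond_b && r.cond_c && r.cond_d) return RowType::D;
    		if (r.cond_b) return RowType::B;
    		if (r.cond_c) return RowType::C;
    		return RowType::Unknown;
    	}
    	
    	// Hydrate, classify in memory, then insert in batches.
    	void process(std::vector<Row> &rows) {
    		for (auto &r : rows) r.type = classify(r);
    		// insert_batch(rows);   // hypothetical batched insert
    	}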

    A few words of caution: Don’t assume that building things with a certain tech or architecture will absolutely be “too slow”. Always favor building things in a way that can be understood. Jumping to the wrong tool “because it’s fast” is a terrible idea.

    Edit: fixed formatting on Sync

    • halloween_spookster@lemmy.world · 12 points · edited · 1 year ago

      If you think it’s slow, first prove it with a benchmark

      So many crimes against maintainability are committed in the name of performance. Optimisation tears down abstractions, exposes internals, and couples tightly. If you’re choosing to shoulder that cost, ensure it is done for good reason.

      • Tom@programming.dev · 5 points · 1 year ago

        Yep, absolutely.

        In another project, I had some throwaway code, where I used a naive approach that was easy to understand/validate. I assumed I would need to replace it once we made sure it was right because it would be too slow.

        Turns out it wasn’t a bottleneck at all. It was my first time using Java streams with relatively large volumes of data (~10k items) and it turned out they were damn fast in this case. I probably could have optimized it to be faster, but for their simplicity and speed, I ended up using them everywhere in that project.

  • Kissaki@feddit.de · 9 points · 1 year ago

    As a more fun or absurd anecdote, I reviewed a change to a SQL Server stored procedure that merely copied a parameter value into a local variable, solving a significant performance issue.

    On first invocation, the query optimizer determines the optimal execution plan. For a stored procedure with parameters, the parameter values can change what the best plan is.

    In this case, the parameter could hold either a record ID or null, meaning all records. Because the query included subqueries, which of the two it is makes a huge difference to the best execution plan.

    Using an intermediate variable prevents the optimizer from locking in and reusing the plan it chose for the other case (the classic parameter sniffing problem).

  • 80avin@programming.dev · 8 points · 1 year ago

    There’s no objective definition of “best”, but here are a few off the top of my head.

    1. Profiled the backend’s startup and reduced startup time by ~20-30x. The issue was that we were traversing the node_modules folder to scan for API documentation.
    2. Profiled Python using cProfile & snakeviz and optimized dozens of regex calls with simple pre-filtering. Improved performance ~20x.
    3. Profiled Python using pprofile & qcachegrind (IIRC) to pinpoint a single heavy regex compilation and optimized it by pre-filtering. Improved performance 8-10x.
    4. Used Node.js streams to load data from S3, parse it as CSV, transform it into a different structure and save it back to S3 in one continuous stream. Didn’t measure the gains, but it should be several times faster at least.
    5. …and many more, some coming from the embedded world and even more drastic & unexpected.

  • halloween_spookster@lemmy.world · 7 points · 1 year ago

    I just recently updated a database call to return the last 7 days of data instead of all data from all time. It went from taking 30+ seconds to around 2-4 seconds. Still a long way to go to make it “good”, but at least it’s not timing out anymore.

  • aluminium@lemmy.world · 7 points · edited · 1 year ago

    This isn’t any great C++ low-level optimization, but I have an Angular project which contains a list showing the live logs of various backend services and other things coming in through websockets (I keep up to 500K lines in memory): essentially an extremely long scrollable list. First, I obviously used a lazy list which only renders the elements that are visible, instead of one that renders everything all the time. Secondly, I removed as many HTML elements from each list element as possible, mostly divs that were only used to apply a CSS class.

    When it comes to the JS code, I got a huge performance improvement by replacing a bunch of pure array.map, array.filter, array.toSorted, … method calls with a big ol’ for loop. These pure functions caused insane memory churn because they create a shallow copy each time, and doing that for an array with 500K entries made the garbage collector work overtime. Also, in the first draft, the whole list was filtered and searched every time new elements arrived via websocket; I changed that so it only processes the new elements.
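
    In C++ terms (a hedged sketch, since the real code is Angular/TypeScript), the difference looks like this:

    	#include <algorithm>
    	#include <iterator>
    	#include <string>
    	#include <vector>
    	
    	// Chained stages: each one materializes a full temporary of up to
    	// 500K entries, which is what churned the garbage collector in JS.
    	std::vector<std::string> chained(const std::vector<std::string> &logs) {
    		std::vector<std::string> filtered;
    		std::copy_if(logs.begin(), logs.end(), std::back_inserter(filtered),
    		             [](const std::string &s) { return s.find("ERROR") != std::string::npos; });
    		std::vector<std::string> sorted = filtered;   // yet another full copy
    		std::sort(sorted.begin(), sorted.end());
    		return sorted;
    	}
    	
    	// One big ol' loop: a single output buffer, no intermediate copies.
    	std::vector<std::string> fused(const std::vector<std::string> &logs) {
    		std::vector<std::string> out;
    		out.reserve(logs.size());
    		for (const auto &s : logs)
    			if (s.find("ERROR") != std::string::npos) out.push_back(s);
    		std::sort(out.begin(), out.end());
    		return out;
    	}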

    Overall this allowed me to increase the number of logs kept in memory from about 20K (beyond which performance would start to dip) to the current 500K.

  • agilob@programming.dev · 5 points · edited · 1 year ago

    Noticed that the Hibernate session (DB ORM session) was leaking into Jackson (JSON marshalling), potentially causing an infinite N+1 problem. Changing a few lines of code to use lazy loading and fixing the session leak reduced our daily data transfer from the DB from 5.6 GB to 170 MB.

    Not sure if this was the biggest optimisation, but definitely the dumbest issue.