• 0 Posts
  • 52 Comments
Joined 2 years ago
Cake day: June 2nd, 2023



  • I don’t think either of these models can work. Fund-to-release is basically the same as crowdfunding, except painted as more focused. Now users, rather than donating $5 every month to fooProject, need to deal with a constant stream of “donate $X to fooProject for feature Y / bug Z”, which sharply increases subscription fatigue. I guess it wouldn’t be subscription fatigue then. Shopping fatigue?

    And what if bugfix X reaches 90% of the funding, and feature Y reaches 90% of the funding, but neither reaches 100%? With a simpler subscription the project would have a set amount of money to distribute across its internal needs.

    And this isn’t even touching on the subject of cost overruns. What happens when your feature estimated at 2 months of dev time is 60% done after 7 weeks? Do you ask for a second donation round?

    Rather, for this kind of focused work a project should keep one single treasury to distribute as needed, and have polls for contributors (monetary or otherwise) to vote on which parts to focus on first.

    The fund-to-release model shifts all the risk onto the authors, who see no monetary reward while the work is ongoing and aren’t guaranteed any even when the work is finished. Dev work needs a constant stream of funding (to eat, pay rent, etc.) unless the author starts with a sizeable initial treasury, in which case they can deal with big lump sums to distribute as they need. But that requires at the very least a guarantee of payment once the work is done which, again, this model does not provide.

    Sorry I don’t have solutions to propose, but I think the flaws of these alternatives far outweigh their pros.




  • Why are you using networkd instead of networkmanager on a desktop?

    What a weird question. networkd works anywhere systemd works; why would desktops be any different?

    It’s the same as asking someone “why are you using systemd-boot instead of GRUB?” Because I like systemd-boot better and it’s easier to configure. Same with networkd: configuration is stupid simple (see the sketch below); I’ve even installed it on my work machine.

    As for OP: since you can manually ping IP addresses and the issue seems to be time-based, could it be that your machine is somehow not renewing its DHCP lease?
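
    To show what I mean by stupid simple, here’s roughly what a complete networkd setup can look like. This is just a sketch under assumptions I’m making up (the file name, the interface name, and that DHCP is even in use on OP’s machine):

    ```ini
    # /etc/systemd/network/20-wired.network
    # (file name and interface name are hypothetical; substitute your own)

    [Match]
    # Apply this profile to the wired interface
    Name=enp3s0

    [Network]
    # Let systemd-networkd's built-in DHCP client acquire and renew the lease
    DHCP=yes
    ```

    If a stale lease is the culprit, `networkctl status enp3s0` shows the current address and whether it came from DHCP, and the systemd-networkd journal (`journalctl -u systemd-networkd`) should show when a lease was last acquired or renewed.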


  • Don’t know why you are getting downvoted, it’s absolutely true. Raw specs these days mean relatively little. With smart frequency boosts that vary with thermals, CPU and GPU on the same package, different workloads stressing different components differently, RAM bandwidth playing different roles for CPU and GPU applications, and many other factors, just stating that the M4 has so-and-so many cores is practically useless.

    The only real way to gauge performance differences is via benchmarks and measuring sustained workloads.





  • Yeah, that’s what we did last time. I implemented a basic framework on top of a very widespread system in our codebase, which would let a number of requested minor features be implemented in a similar way with minimal boilerplate, leaving the bulk of the work to implementing the actual meat of each request.

    These requests were completely independent and so could be parallelized easily. The “framework” I implemented was also incredibly thin (basically just a helper function and a human instruction of the form “do this for this use case”) on top of a system that was already common knowledge. My expectation was to bring someone up to speed on a few things and then let them loose on this collection of tasks, maybe answering a question or two a couple of times a day.

    Instead, since the assigned colleague is basically just a copilot frontend, I had to spend 80% or more of my days explaining exactly what needed to be done (I would always start with the whys of things, since the whats derive from them, but this particular colleague seems uninterested in that).

    So I was basically spending my time programming a set of features by proxy, while I was ostensibly working on a different set of features.

    So yeah, splitting work only works if you also have people capable of doing it in the first place. Of course, I couldn’t refuse to help this colleague either; that’s a bad mark at performance review time, you know, even when the colleague has no intention of learning or being productive in any way. (I live in a country with strong employee protections, so almost nobody can be fired over actual work performance, and this particular colleague doesn’t hide that they don’t care about doing a good job, except to managers, so they still get pay raises for “improving”.)

    Yeah, you can tell I’m unhappy


  • who is actually stopping them from dealing with it?

    Management. Someone in management sets idiotic deadlines, then someone tells you “do X”, you estimate and come up with “it will take T amount of time” and production simply tells you “that’s too long, do it faster”

    they don’t care about the details or maintenance

    They don’t, they care about time. If there are 6 weeks to implement a feature that requires reworking half the product, they don’t care to know half the product needs to be reworked. They only care to hear you say that you’ll get it done in 6 weeks. And if you say that’s impossible, they tell you to do it anyway

    you have to include the cost of managing technical debt

    I do. When I get asked why my time estimates are so much longer than my colleagues’, I say that I include the known costs required to develop the feature, plus a buffer for known unknowns and unknown unknowns. Historically that buffer has been necessary 100% of the time and has never been included, which has left us with development difficulties, cost and schedule overruns, delays and quality issues, internal unhappiness, sometimes mandatory overtime, and usually a crappy product that the customers are unhappy with. That’s me doing a good job, right? Except I got told to ignore all of that and only include the minimum time to get each of the dozens of tiny pieces working. We went over time and over cost, and each tiny piece “works” in isolation but doesn’t really mesh with everything else, because there was no integration time, so each feature kind of just exists there on its own.

    Then we do retrospectives in which we highlight all the process mistakes we ran into, only to make them all again next time. And I get blamed come performance review time because I was stressed and wasn’t at the top of my game this past year, due to being chronically overburdened, overworked, and underpaid.






  • It’s one thing to pay, and another to be squeezed dry.

    When ads were mostly static banners on websites, almost nobody blocked them, because they were largely unobtrusive.

    However, they would often link to shady websites that would install random crap, so the use case for blocking them was already there.

    Then they became animated, and they multiplied. It was one at the bottom of content at first. Then a couple. Then two vertical banners on the sides too. Then more rectangular banners here and there for good measure.

    Then they became unkillable javascript popups, then proper new browser windows. Then autoplaying videos with audio were added. And this is just the visible stuff. Add tracking pixels, tracking cookies, browser fingerprinting, and tons of other spying technology deployed under the guise of “but the content is free”.

    With every step, the use of ad and tracking blockers became more legitimate, as serving ads moved further and further away from paying for free content and squarely into the business of selling user data collected without consent for huge profit margins.

    If ads and subscriptions were enough to just make a normal amount of profit, very few would be blocking ads or pirating content, because the amount of ads or the price of subscriptions would be reasonable and affordable.

    But since everyone wants to make a 1000% markup on the content they generate, they will drive their very own paying customers away.

    Youtube could have served me a couple ads per video and I would have kept using it forever. Instead they served me a minimum of 20 ads per video, so now they will serve me zero, forever.

    Netflix could have gotten 12 euros every month out of me for their ever-dwindling content selection. Instead they wanted 14 after a while. And 17 after a while. And 19 after a little while more. All the while refusing to serve me the 4K content I paid for.

    So instead they now get zero too.

    I am very happy to pay for content, and so are a lot of people like me. But the comment you originally replied to was about youtube increasing the price of their subscription by ludicrous amounts. You replied that content isn’t free, and I replied that youtube has no problem making money. The increases aren’t there to keep youtube afloat; they’re there to make youtube 10 billion in profit next year rather than 8.

    It’s not about paying a fair amount of money for content; it’s about making you pay all that you can give and sucking you dry.

    So to your question “how do you pay for content/services in general?” I answer “with money”, but that is not what is happening here.