I appreciate the effort, but I was not critiquing your reading, more that I took it differently. That’s just a misread on my part, and my point was not about general investing as a proxy for or driver of progress.
Perhaps we’re talking to different points. The parent comment said that investors are always looking for better and better returns. You said that’s how progress works. That sentiment was my quibble.
I took the “investors are always looking for better returns” to mean “unethically so” and was more talking about what happens long term. Reading your comment above, I think you might have been talking about good-faith investing.
In a sound system that’s how things work, sure! The company gets investment into its tech and continues to improve, and the investors get to enjoy the returns of that progress.
You’re conflating creating dollar value with progress. Yes, the technology moves the total net productivity of humankind forward.
Investing exists because we want to incentivize that. Currently you and the thread above are describing bad actors coming in, seeing this small single-digit productivity increase, and misrepresenting it so that other investors buy in, then dipping out and causing the bubble to burst.
Something isn’t a ‘good’ investment just because it makes you a 600% return. I could go rob someone if I wanted that return. Hell, even if I then killed that person by accident, the net negative to human productivity would be less.
These bubbles unsettle homes, jobs, markets, and educations. Inefficiency that only makes money for people in the stock market should have been crushed out.
I don’t disagree with anything you said, but wanted to weigh in on the additional degrees of freedom.
One major thing to consider is that unless we have 24/7 sensor recording with AI out in the real world, plus continuous monitoring of sensor/equipment health, we’re not going to have the “real” data that the AI triggered on.
Version and model updates will also likely continue to cause drift unless managed through some sort of central distribution service.
Any large corp will have this organization and review in place, or be in the process of figuring it out. Small NFT/crypto bros that jump to AI will not.
IMO the space will either head towards larger AI ensembles that try to understand where an exact rubric applies vs. more AGI-like human reasoning, or we’ll have to rethink the nuances of our train/test setup and how humans use language to interact with others vs. understand the world (we all speak the same language as someone else, but there’s still a ton of inefficiency).
I agree; however, as much as I wish our governments would do both, they won’t. At least not any time soon. This is why I said we should be playing moneyball. I don’t disagree with anything you said.
I think the additionality to the grid as these renewables come online is great… but if they only cover the energy to run them, then they’re not expanding the grid for everyone else and emissions continue. I agree it incentivizes renewable builds, but only if they power more of the grid vs. just being dedicated to the wells.
We’re headed towards a world where corps are incentivized to buy up all the clean energy on the market and leave consumers with the fossil fuels right now. We just don’t have enough clean or renewable energy to power everything, and demand is only increasing.
Agreed! I was just mentioning the only negative angle I could see, still a net positive!
We are not beyond the emissions-reduction stage, and will not be until the grid is 100% powered by renewables or other emissions-free energy.
Switching to clean energy is emissions reduction. Imo it should be our #1 priority, because we’re not reducing power demand without massive societal change.
Amen. The only angle I can see someone disagreeing with is trees becoming a potential bank of carbon that gets fed back into the atmosphere as fuel for wildfires.
I so wish there were better ways to control forest fires.
From an industry standpoint, everything the article says at the end as a critique is correct. We should be playing moneyball; those fans that draw in the particles would be an additional toll on the power grid.
Instead, spend the money on removing the emission sources and modernizing our grid/reducing fuel emissions. After we’ve exhausted the low-hanging fruit there, we’ll have to throw money at offset tech.
I suppose we’ll have to get the tech made eventually but there’s just so much to be reworked on our grids as is.
12 yr club here 👍
These all align with my understanding! Only thing I’d mention is that when I said “we’ve not had llms available” I meant “LLMs this powerful ready for public usage”. My b
I haven’t been in decision analytics for a while (and people smarter than I are working on the problem), but I meant more along the lines of the “model collapse” issue. Just because a human gives a thumbs up or down doesn’t make the output human-written training data to be fed back in. Eventually the stuff it outputs becomes “the most likely prompt response that this user will thumbs-up and accept”. (Note: I’m assuming the thumbs up or down have been pulled back into model feedback.)
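To make the shape of that feedback loop concrete, here’s a toy simulation I threw together (purely illustrative: the “model” is just a probability distribution over response styles, and the thumbs-up rates are numbers I made up, not how any real feedback pipeline is built):

```python
# Toy sketch of preference-feedback collapse: sample responses, keep only
# the thumbs-upped ones, refit the "model" to the survivors, repeat.
import numpy as np

rng = np.random.default_rng(0)
styles = ["terse", "hedged", "flattering", "detailed", "contrarian"]
probs = np.full(len(styles), 1 / len(styles))  # the "model": starts uniform

# Assumed user preference: "flattering" gets a thumbs up 90% of the time,
# everything else only 40% of the time.
up_rate = np.array([0.4, 0.4, 0.9, 0.4, 0.4])

for _ in range(10):
    samples = rng.choice(len(styles), size=5000, p=probs)   # generate responses
    kept = samples[rng.random(5000) < up_rate[samples]]     # keep only thumbs-ups
    counts = np.bincount(kept, minlength=len(styles)) + 1   # +1 smoothing
    probs = counts / counts.sum()                           # "retrain" on survivors

print({s: round(float(p), 3) for s, p in zip(styles, probs)})
```

After a handful of rounds the probability mass piles onto whatever the rater rewards, regardless of whether it was the most correct answer, which is the flavor of collapse I’m worried about.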
Per my understanding that’s not going to remove the core issue which is this:
Any sort of AI detection arms race is doomed. There is ALWAYS new ‘real’ video for training, and even if GANs are a bit outmoded, the core concept of using synthetically generated content to train is a hot thing right now. Technically, whoever creates the fake videos to train on would have a bigger training set than the checkers.
Since we see model collapse when we feed too much of this back to the model we’re in a bit of an odd place.
We’ve not even had an LLM available for the entire year, but we’re already having trouble distinguishing.
Making waffles so I only did a light google, but I don’t really think ChatGPT is leveraging GANs for its main algos; simply that the GAN concept could be applied easily to LLM text to make delineation even harder.
We’re probably going to need a lot more tests and interviews on critical reasoning and logic skills. Which is probably how it should have been but it’ll be weird as that happens.
sorry if grammar is fuckt - waffles
A predictable issue if you knew the fundamental technology that goes into these models. Hell, it should have been obvious to the layperson that it was headed this way once they saw the videos and heard the audio.
We’re less sensitive to patterns in massive data; the point at which we can’t tell fact from AI fiction based on the content comes before the point at which these machines can’t tell. Good luck with the FB aunts.
A GAN’s final goal is to develop content that is indistinguishable… Are we surprised?
Edit, since the person below me made a great point: GANs may be limited, but there’s nothing that says you can’t set up a generator and a detector LLM and train them against each other for the sole purpose of improving the generator.
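For anyone who hasn’t seen the mechanics, a minimal GAN-style loop really is just that: a generator trying to fool a detector, and a detector trying to catch it, each trained on the other’s output. A rough PyTorch sketch on toy 1-D data (the sizes and names are illustrative assumptions on my part, not any production setup):

```python
# Minimal adversarial training sketch: a generator learns to mimic "real"
# data, a detector learns to tell real from generated.
import torch
import torch.nn as nn

real_dist = torch.distributions.Normal(4.0, 1.25)  # stand-in for "real" content
noise_dim, batch = 8, 64

gen = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, 1))
det = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(det.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the detector: label real samples 1, generated samples 0.
    real = real_dist.sample((batch, 1))
    fake = gen(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(det(real), torch.ones(batch, 1)) + bce(det(fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the detector call its output "real".
    fake = gen(torch.randn(batch, noise_dim))
    g_loss = bce(det(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The detector only ever “wins” temporarily; whatever signal it learns gets folded straight back into the generator’s objective, which is why I think the detection arms race is lopsided.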
It’d be an uphill battle, but if someone got into programming via free online courses they could build a resume by contributing to projects on GitHub. It’d be a way to prove skill without the diploma.
The advice goes the same for anything where you can build a portfolio to demonstrate competency; most people in industry just care about results. This could be photography, graphic design, physical trades like woodworking, etc.
Sucks because you’d have to outlay time upfront before maybe getting paid though. Ymmv
Yeah, I appreciate the nuance too! It’s just I don’t have anything to really add as I’m the one who misread!