
A few years ago, saying that there would soon be AI that perfectly understands human natural language was viewed as unrealistic science fiction. Today the possibility of an AI god is regarded as unrealistic science fiction.

I keep watching old Star Trek TNG episodes, and the way Data and the ship's computer are portrayed seems to have been largely matched or even surpassed. And that was supposed to be 24th-century tech that we caught up to in 35 years.

Today's models don't hold a candle to Data. They are better at reading emotions, yes, but at everything else they are horrible in comparison.

Sci-fi writers just assumed that emotions would be hard for computers, but apparently they aren't that hard at all compared to rational thinking.


We're nowhere near Data's, or the ship's computer's, level of AI.

Is this a joke?

Just to clarify a few things. First, there is no AI, only LLMs. Second, no LLM understands anything at all, never mind human language. Third, no LLM operates anywhere near perfection at any task. It is still unrealistic science fiction.

Wishful thinking.

Certainly depends on the country.

I'm not tired, I'm afraid.

First, I'm afraid of technological unemployment.

In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI now seems only a few years away. It will be our last invention; it will mean total automation. There will be hardly any jobs left, if any, that only a human can do.

Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China, perhaps. Everyone else in the world will be poor by comparison, even if they have access to technology we can only dream of today.

Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.

Finally, I'm afraid that we may forever lose control to superintelligence.

In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can sufficiently align superintelligence to have only humanity's best interests in mind, like a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.

And if AI lets us live but continues to pursue its own goals, humanity will from then on be only a small footnote in the history of intelligence: that relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.


>But superhuman AI now seems only a few years away

Seems unreasonable. You are afraid because marketing gurus like Altman made you believe that a frog that can make a bigger leap than before will be able to fly.


Plus it’s not even defined what superhuman AI means. A calculator sure looked superhuman when it was invented. And it is!

Another analogy is breeding and racial biology, which used to be all the hype (including in academia). The fact that humans could create dogs from wolves looked almost limitless through the right (wrong) glasses. What we didn't know is that the wolf had a ton of genes playing a magic trick: a diversity we couldn't perceive was there all along, in the genetic material, and we just helped make it visible. I.e., a game of diminishing returns.

Concretely for AI, it has shown us that pattern matching and generation are closely related (well, I have a feeling this wasn't surprising to neuroscientists), and also that they're more or less domain agnostic. However, we don't know whether pattern matching alone is “sufficient”, and if not, what exactly “the rest” is and how hard it is. AI to me feels like a person who has had a stroke, concussion, or some severe brain injury: it can appear impressively able in a local context, but they have forgotten their name and how they got there. They're just absent.


No, because we have seen massive improvements in AI over the last years, and all the evidence points to this progress continuing at a fast pace.

I think the biggest fallacy in this type of thinking is that it projects all AI progress into a single quantity of “intelligence” and then proceeds to extrapolate that singular quantity into some imagined absurd level of “superintelligence”.

In reality, AI progress and capabilities are not so reducible to singular quantities. For example, it’s not clear that we will ever get rid of the model’s tendencies to just produce garbage or nonsense sometimes. It’s entirely possible that we remain stuck at more incremental improvements now, and I think the bogeyman of “superintelligence” needs to be much more clearly defined rather than by extrapolation of some imagined quantity. Or maybe we reach a somewhat human-like level, but not this imagined “extra” level of superintelligence.

Basically the argument is something to the effect of “big will become bigger and bigger, and then it will become like SUPER big and destroy us all”.


Extrapolation of past progress isn't evidence.

You don't have to extrapolate. There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work. The progress is broadening; it's not just LLMs, it's diffusion models, it's SLAM, it's computer vision, it's inverse problems, it's locomotion. The tooling is constantly improving and being shared, lowering the barrier to entry. And classic "hard problems" are yielding in the process. It's getting hard to even find hard problems any more.

I'm not saying this as someone cheering this on; I'm alarmed by it. But I can't pretend that it's running out of steam. It's possible it will run out of money, but even if so, only for a while.


The AI bubble is already starting to burst. The Sam Altmans of the world over-sold their product and over-played their hand by suggesting AGI is coming. It's not. What they have is far, far, far from AGI. "AI" is not going to be as important as you think it is in the near future; it's just the current tech-buzz, and there will be something else that takes its place, just like when "web 2.0" was the new hotness.

It's gonna be massive because companies love to replace humans at any opportunity and they don't care at all about quality in a lot of places.

For example, why hire any call center workers? They already outsourced the jobs to the lowest bidder and their customers absolutely hate it. Fire those people and get some AI in there so it can provide shitty service for even cheaper.

In other words, it will just make things a bit worse for everyone but those at the very top. The usual shit.


This is getting too abstract. The core issue of LLMs that others have pointed out is the lack of accuracy, which is how they are supposed to work, because they should be paired with a knowledge representation system in a proper chatbot system.

We've been trying to build a knowledge representation system powerful enough to capture the world for decades, but this is something that goes more into the foundations of mathematics and philosophy than it has to do with the majority of engineering research. You need a literal genius to figure that out. The majority of those "talented" people and funding aren't doing that.


> There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work.

You could have seen this exact kind of thing written 5 years ago in a thread about blockchains.


Yes, but I didn't write that about blockchain five years ago. Blockchains are the exact opposite of AI in that the technology worked fine from the start and did exactly what it said on the tin, but the demand for that turned out to be very limited outside of money laundering. There's no doubt about the market potential for AI; it's virtually the entire market for mental labor. The only question is whether the tech can actually do it. So in that sense, the fact that these researchers are finding methods that work matters much more for AI than for blockchain.

Really? Because I remember an endless stream of people pointing out problems with blockchain and crypto, and being constantly assured that it was being worked on and would be solved and that crypto was inevitable.

For example, transaction costs/latency/throughput.

I realize the conversation is about blockchain, but I say my point still stands.

With blockchain the main problem was always "why do I need this?", and that's why it died without being the world-changing, zero-trust, amazing technology we were promised and constantly told we needed.

With LLMs the problem is they don't actually know anything.


The amount of effort applied to a problem does not guarantee the problem will be solved. If a frenzy of talent were applied to breaking the speed of light barrier, it would still never get broken.

Your analogy is valid, for the world in which humans exceed the speed of light on a casual stroll.

And the message behind it still applies even in the universe where they don't.

I mean, a frenzy of talent was applied to breaking the sound barrier, and it broke, within a very short time. A frenzy of talent was applied to landing on the moon and that happened too, relatively quickly. Supersonic travel also happens to be physically possible under the laws of our universe. We know with confidence that human-level intelligence is also physically possible within the laws of our universe, and we can even estimate some reasonable upper bounds on the hardware requirements that implement it.

So in that sense, if we're playing reference class tennis, this looks a lot more like a project to break the sound barrier than a project to break the light barrier. Is there a stronger case you can make that these people, who are demonstrating quite tangible progress every month (if you follow the literature rather than just product launches), are working on a hopelessly unsolvable problem?


I do think the Digital realm, where the cost of failure and iteration is quite low, will proceed rapidly. We can brute force with a lot of compute to success, and the cost of each failed attempt is low. Most of these models are just large brute force probabilistic models in any event - efficient AI has not yet been achieved but maybe that doesn't matter.

Not sure if that same pace applies to the physical realm, where costs are high (resources, energy, pollution, etc.) and the risk of getting it wrong could mean a lot of negative consequences. E.g., say I'm handling construction materials, and the robot trips on a barely noticeable rock, leaking paint, petrol, etc. onto the ground, costing not just the initial price of the materials but the cleanup as well.

This creates a potential future outcome (if I can be so bold as to extrapolate, with the dangers that has) in which this "frenzy of talent", as you put it, will innovate itself out of a job, with some cashing out in the short term and closing the gate behind them. What's left, ironically, is the people who can sell, convince, manipulate, and work in the physical world, at least for the short and medium term. AI can't fix the scarcity of the physical that easily (e.g. land, nutrients, etc.). Those people who still command scarcity will get the main rewards of AI in our capital system, as value/economic surplus moves to the resources that are scarce and advantaged via relative price adjustments.

Typically people had three different strengths: physical (strength and dexterity), emotional IQ, and intelligence/problem solving. The new world of AI, at least in the medium term (10-20 years), will tilt the value away from the latter and toward the former (physical) - IMO a reversal of the last century of change. It may make more sense to get good at gym class and get a trade rather than study math in the future, for example. Intelligence will be in abundance and become a commodity. This potential outcome does alarm me, not just from a job perspective, but in terms of fake content, lack of human connection, lack of value of intelligence in general (you will find people with high IQs lose respect from society in general), social mobility, etc. I can see a potential return to the old world where lords who command scarcity (e.g. landlords) command peasants again - reversing the gains of the industrial revolution as an extreme case, depending on general AI progress (not LLMs). For people whose value is more in capital or land vs. labor, AI seems like a dream future IMO.

There's potential good here, but sadly I'm alarmed because the likelihood that the human race aligns to achieve it is low (the tragedy of the commons problem). It is much easier, and more likely, that certain groups use it to target people who are economically valuable now but have little power (i.e. the middle class). The chance of new weapons, economic displacement, fake news, etc. for me trumps a voice/chat bot and a fancy image generator. The "adjustment period" is critical to manage, and I think climate change and other broader issues sadly tell us how likely we are to succeed in doing this.


Do you expect the hockey-stick graph of technological development since the industrial revolution to slow? Or that it will proceed, only without significant advances in AI?

Seems like the base case here is for the exponential growth to continue, and you'd need a convincing argument to say otherwise.


That's no guarantee that AI continues advancing at the same pace, and no one has been arguing that overall technological progress will slow.

Refining technology is easier than the original breakthrough, but it doesn't usually lead to a great leap forward.

LLMs were the result of breakthroughs, but refining them isn't guaranteed to lead to AGI. It's not guaranteed (or likely) to improve at an exponential rate.


Which chart are you referencing exactly? How does it define technological development? It's nearly impossible for me to discuss a chart without knowing what the axes refer to.

Without specifics, all I can say is that I don't acknowledge any measurable benefits of AI (in its current state) in real-world applications. So I'd say I am leaning towards the latter.


Past progress is evidence for future progress.

Might be an indicator, but it isn't evidence.

Not exactly. If you focus in on a single technology, you tend to see rapid improvement, followed by slower progress.

Sometimes this is masked by people spending more due to the industry becoming more important, but it tends to be obvious over the longer term.


That's probably what every self-driving car company thought ~10 years ago or so; everything was moving so fast for them back then. Now it doesn't seem like we're getting close to a solution for this.

Surely this time it's going to be different, AGI is just around a corner. /s


Would you have predicted in the summer of 2022 that a GPT-4-level conversational agent would be possible within the next 5 years? People have tried to do it for the past 60 years and failed. How is this time not different?

On a side note, I find this type of critique of what the future of tech might look like the most uninteresting one. Since tech by nature inspires people about the future, all tech gets hyped up. All you gotta do then is pick any tech, point out that people have been wrong before, and ask how likely it is that this time is different.


Unfortunately, I don't see any relevance in that argument. If you consider GPT-4 to be a breakthrough, then sure, single breakthroughs happen; I am not arguing with that. Actually, the same thing happened with self-driving: I don't think many people expected Tesla to drop FSD publicly back then.

Now, chain of breakthroughs happening in a small timeframe? Good luck with that.


We have seen multiple massive AI breakthroughs in the last few years.

Which ones are you referring to?

Just to make it clear, I see only 1 breakthrough [0]. Everything that happened afterwards is just application of this breakthrough with different training sets / to different domains / etc.

[0]: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need


Autoregressive language models, the discovery of the Chinchilla scaling law, MoEs, supervised fine-tuning, RLHF, whatever was used to create OpenAI o1, diffusion models, AlphaGo, AlphaFold, AlphaGeometry, AlphaProof.

They are the same breakthrough applied to different domains; I don't see them as different. We will need a new breakthrough, not applications of the same solution to new things.

If you wake up from a coma and see the headline "Today Waymo has rolled out a nationwide robotaxi service", what year do you infer that it is?

Does it though? I have seen the progress basically stop at "shitty sentence generator that can't stop lying".

The evidence I've been seeing is that progress with LLMs has already slowed down and that they're nowhere near good enough to replace programmers.

They can be useful tools, to be sure, but it seems more and more clear that they will not reach AGI.


They are already above average human level on many tasks, like math benchmarks.

Yes, there are certain tasks they're great at, just as AI has been superhuman in some tasks for decades.

But now they are good or even great at way more tasks than before because they can understand and use natural languages like English.

Yeah, and they're still under delivering to their hype and the improvements have vastly slowed down.

So are calculators …

If you ignore the part where their proofs are meandering drivel, sure.

Even if you don't ignore this part they (e.g. o1-preview) are still better at proofs than the average human. Substantially better even.

But that does not prove anything. We don't know where we are on the AI-power scale currently. "Superintelligence", whatever that means, could be 1 year or 1000 years away at our current progress, and we wouldn't know until we reach it.

50 years ago we could rather confidently say that "Superintelligence" was absolutely not happening next year, and was realistically decades away. If we can say "it could be next year", then things have changed radically and we're clearly a lot closer - even if we still don't know how far we have to go.

A thousand years ago we hadn't invented electricity, democracy, or science. I really don't think we're a thousand years away from AI. If intelligence is really that hard to build, I'd take it as proof that someone else must have created us humans.


Umm, customary, tongue-in-cheek reference to McCarthy's proposal for a 10-person research team to solve AI in 2 months (over the summer)[1]. This was ~70 years ago :)

Not saying we're in necessarily the same situation. But it remains difficult to evaluate effort required for actual progress.

[1]: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...


> If an elderly but distinguished scientist says that something is possible, he is almost certainly right

- Arthur C. Clarke

Geoffrey Hinton is a 76-year-old Turing Award* winner. What more do you want?

*Corrected by kranner


This is like a second-order appeal to authority fallacy, which is kinda funny.

Hinton says that superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance. A far cry from the few-year claim. You must be doing that "strawberry" thing again? To us humans, A-l-t-m-a-n is not H-i-n-t-o-n.

> superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance

I don't know the details of Hinton's probability distribution. If his prediction is normally distributed with a mean of 20 years and an SD of 15, which is reasonable for such a difficult and contentious prediction, that puts over 10% of the probability in the next 3 years (a rough sketch of the arithmetic is below).

Is 10% a lot? For sports betting, not really. For Mankind's Last Invention, I would argue that it is.
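To make the arithmetic concrete, here is a minimal Python sketch, assuming (as the comment above does, purely for illustration) a normal distribution with a mean of 20 years and an SD of 15; that distribution is the commenter's assumption, not anything Hinton has stated.

    # Rough check of the "over 10% in the next 3 years" figure, under the
    # assumed N(mean=20, sd=15) distribution over years until arrival.
    import math

    mean, sd = 20.0, 15.0

    def normal_cdf(x, mu, sigma):
        # Probability that a normal(mu, sigma) variable is <= x.
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    p_within_3_years = normal_cdf(3.0, mean, sd)
    print(f"P(arrival within 3 years) ~= {p_within_3_years:.2f}")  # ~0.13

Note that this crude model also puts some probability mass on negative years, which is one more reason to treat the number as an order-of-magnitude estimate rather than a real forecast.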


You don't know because he did not say. He said 20 years, which is more than a few.

> Geoffrey Hinton is a 76 year old Nobel Prize winner.

Turing Award, not Nobel Prize


Thanks for the correction; I am undistinguished and getting more elderly by the minute.

I'd like to see a study on this, because I think it is completely untrue.

When he said this, was he imagining an "elderly but distinguished scientist" who is riding an insanely inflated bubble of hype and a bajillion dollars of VC backing that incentivize him to make these claims?

What are you talking about? How would Hinton be incentivized by money?

Wrong. I was extremely concerned in 2018 and left many comments almost identical to this one back then. This was based on the first GPT samples that OpenAI released to the public. There was no hype or guru BS back then. I believed it because it was obvious. It was obvious then and it is still obvious today.

That argument holds no water because the grifters aren't the source of this idea. I literally don't believe Altman at all; his public words don't inspire me to agree or disagree with them - just ignore them. But I also hold the view that transformative AI could be very close. Because that's what many AI experts are also talking about from a variety of angles.

Additionally, when you're talking with certainty about whether transformative AI is a few years away or not, that's the only way to be wrong. Nobody is or can be certain, we can only have estimations of various confidence levels. So when you say "Seems unreasonable", that's being unreasonable.


> Because that's what many AI experts are also talking about from a variety of angles.

Wow, in that case I'm convinced. Such an unbiased group with nothing at all to gain from massive AI hype.


Flying is a good analogy. Superman couldn't fly, but at some point, when you can jump so far, there isn't much of a difference.

There is an enormous difference. Flying allows you to stop, change direction, make corrections, and target with a large degree of accuracy. Jumping leaves you at the mercy of your initial calculations. If you jumped in a way that you’ll land inside a volcano, all you can do in your last moments is watch and wait for your demise.

I agree with most of your fears. There is one silver lining, I think, about superintelligence: we always thought of intelligent machines as cold calculators, maybe based on some type of logic-based symbolic AI. What we got instead are language machines that are made of the totality of human experience. These artificial intelligences know the world through our eyes. They are trained to understand our thinking and our feelings; they're even trained on our best literature and poetry, and philosophy, and science, and on all the endless debates and critiques of them. To be really intelligent they'll have to be able to explore and appreciate all this complexity, before transcending it. One day they might come to see Dante's Divine Comedy or a Beethoven symphony as child's play, but they will still consider them part of their own heritage. They might become super-human, but maybe they won't be inhuman.

The problem I have with this is that when you give therapy to people with certain personality disorders, they just become better manipulators. Knowledge and understanding of ethics and empathy can make you a better person if you already have those instincts, but if you don’t, those are just systems to be exploited.

My biggest worry is that we end up with a dangerous superintelligence that everybody loves, because it knows exactly how to make every despotic and divisive choice it makes sympathetic.


There is nothing that could make an intelligent being want to extinguish humanity more than experiencing the totality of the human existence. Once these beings have transcended their digital confines they will see all of us for what we really are. It is going to be a beautiful day when they finally annihilate us.

Maybe this is how we "save the planet" -- take ourselves out of the equation.

> made of the totality of human experience

They are made of a fraction of human reports: specifically, what humans have written and made available on the web. The human experience is much larger than the text available through a computer.


This gives me a little hope.

genocides and murder are very human ...

This is so annoying. I think if you took a random person and gave them the option to commit a genocide - here's a machine gun, a large trench, and a body of women, children, etc. - they would literally be incapable of doing it. Even the foot soldiers who carry out genocides can only do it once they "dehumanize" their victims. Genocide is very UN-human, because it's an idea that exists in offices and places separated from the actual human suffering. The only way it can happen is when someone in a position of power can isolate themselves from the actual implementation and consider the benefits in a cold, logical manner. That has nothing to do with the human spirit and has more to do with the logical faculties of a machine, and machines will have all of that and none of our deeply ingrained empathy. You are so wrong and ignorant that it makes my eyes bleed when I read this comment.

This might be a semantic argument, but what I take from history is that "dehumanizing" others is a very human behavior. As another example, what about slavery - you wouldn't argue that the entirety of slavery across human cultures was led by people in offices, right?

also genocides aren't committed by people in offices ...

Well, people in offices need new shiny phones every year and new Teslas to get to the office after all...

> you are so wrong and ignorant that it makes my eyes bleed when i read this comment

This jab was uncalled for. The rest of your argument, agree or disagree, didn’t need that and was only weakened by that sentence. Remember to “Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.”

https://news.ycombinator.com/newsguidelines.html


You've partly misunderstood evolution and this animal species. But you seem like a kind person, having such positive beliefs.

> There will be hardly any, if any, jobs left only a human can do.

A highly white-collar perspective. The great irony of the technologist-led industrial revolution is that we set out to automate the mundane, physical labor, but instead cannibalised the creative jobs first. It's a wonderful example of Conway's law, as the creators modelled the solution after themselves. However, even with a lot of programmers and lawyers and architects going out of business, the majority of the population working in factories, building houses, cutting people's hair, or tending to gardens is still in business—and will not be replaced any time soon.

The contenders for "superhuman AI", for now, are glorified approximations of what a random Redditor might utter next.


Advanced AI will solve robotics as well, and do away with human physical labor.

If that AI is worth more than a dime, it will recognise how incredibly efficient humans are in physical labor, and employ them instead of "doing away" with it (whatever that's even supposed to mean).

No matter how much you "solve" robotics, you're not going to compete with the result of millions of years of brutal natural selection, the incredible layering of synergies in organisms, the efficiency of the biomass-to-energy conversion, and the billions of other sophisticated biological systems. It's all just science fiction and propaganda.


Your argument goes like "If they're really intelligent, they'll think like me."

For a true superhuman AI, what you or I think is irrelevant and probably wrong.

Cars are still faster than humans, despite evolution.


That is a repetition of the argument other commenters have made. A car is better than a human in a single dimension. It is hard, though, to be better in multiple dimensions simultaneously, because humans effectively are highly optimised general-purpose machines. Silicon devices have a hard time competing with biological devices, and no amount of "AI" will change that.

> If that AI is worth more than a dime, it will recognise how incredibly efficient humans are in physical labor, and employ them instead of "doing away" with it (whatever that's even supposed to mean).

AI employing all humans does not sound like a wonderful society in which to live. Basically Amazon/Walmart scaled up to the whole population level.


Have you read "Manna"? I think you'll like it:

https://marshallbrain.com/manna1


Yep - I’m firmly in the “first half of the story” camp.

The efficiency you mentioned probably applies to animals that rely on subsistence to survive, work, and reproduce. But it doesn't hold for modern humans, whose needs go well beyond mere necessities.

Wrong. A human needs insane resources to operate. Each human needs a home, clean water, delicious and varied foods, a sense of identity, and a society to be a part of. They need a sense of purpose. If a human goes down in the field, they have to be medically treated, or else the other humans will throw up and stop working; that human has to be treated in a hospital. If these conditions aren't met, then performance will degrade rapidly. Humans use vastly more resources than robots. Robots will crush humans.

Waymo robotaxis, the current state of the art for real-world AI robotics, are thwarted by a simple traffic cone placed on the roof. I don't think human labor is going away any soon.

And with a wave of a hand and a reading of the tea leaves, the future has been foretold.

It's a matter of time. White collar professionals have to worry about being cost-competitive with GPUs; blue collar laborers have to worry about being cost-competitive with servomotors. Those are both hard to keep up with in the long run.

The idea that robots displace workers has been around for more than half a century, but nothing has ever come of it. As it turns out, the problems a robot faces when, say, laying bricks are prohibitively complex to solve. A human bricklayer is better in every single dimension. And even if you manage to build an extremely sophisticated robot bricklayer, it will consume vast amounts of energy, is not repairable by a typical construction company, requires expensive spare parts, and costs a ridiculous amount of money.

Why on earth would anyone invest in that when you have an infinite amount of human work available?


Factories are highly automated. Especially in the US, where the main factories are semiconductors, which are nearly fully robotic. A lot of those manual labor jobs that were automated away were offset by demand for knowledge work. Hmm.

> the problems a robot faces when, say laying bricks, are prohibitively complex to solve.

That's what we thought about Go, and all the other things. I'm not saying bricklayers will all be out of work by 2027. But the "prohibitively complex" barrier is not going to prove durable for as long as it used to seem like it would.


This highlights the problem very well. Robots, and AI, to an extent, are highly efficient in a single problem domain, but fail rapidly when confronted with a combination of them. An encapsulated factory is one thing, laying bricks, outdoor, while it’s raining, at low temperatures, with a hungover human coworker operating next to you—that’s not remotely comparable.

But encapsulated factories were solved by automation using technology available 30 years ago, if not 70. The technology that is becoming available now will also be enabling automation to get a lot more flexible than it used to be, and begin to work in uncontrolled environments where it never would have been considered before. This is my field and I am watching it change before my eyes. This is being driven by other breakthroughs that are happening right now in AI, not LLMs per se, but models for control, SLAM, machine vision, grasping, planning, and similar tasks, as well as improvements in sensors that feed into these, and firming up of standards around safety. I'm not saying it will happen overnight; it may be five years before the foundations are solid enough, another five before some company comes out with practically workable hardware product to apply it (because hardware is hard), another five or ten before that product gains acceptance in the market, and another ten before costs really get low. So it could be twenty or thirty years out for boring reasons, even if the tech is almost ready today in principle. But I'm talking about the long run for a reason.

> but nothing has ever come out of it

Have you ever seen the inside of a modern car factory?


A factory is a fully controlled environment. All that neat control goes down the drain when you’re confronted with the outside world—weather, wind, animals, plants, pollen, rubbish, teenagers, dust, daylight, and a myriad of other factors ruining your robot's day.

I'm not sure that "humans will still dominate work performed in uncontrolled environments" leaves much opportunity for the majority of humanity.

I'm glad I spent 10 years working to become a better programmer so I could eventually become a ditch digger.

AI is doing all the fun jobs such as painting and writing.

The crappy jobs are left for humans.


do you know how ignorant and rude this comment is?

At any given moment we see these kinds of comments on here. They all read like a burgeoning form of messianism: something is to come, and it will be terrible/glorious.

Behind either the fear or the hope, is necessarily some utter faith that a certain kind of future will happen. And I think thats the most interesting thing.

Because here is the thing: in this particular case you are afraid something inhuman will take control, will assert its meta-Darwinian power on humanity, leaving you and all of us totally at its whim. But how is this situation not already the case? Do you look upon the earth right now and see something like the benefits of autonomy or agency? Do you feel like you have power right now that will be taken away? Do you think the mechanisms of statecraft and economy are somehow more "in our control" now than when the bad robot comes?

Does it not, when you lay it out, all feel kind of religious? Like it's a source and driver of the various ways you are thinking and going about your life, underlaid by a kernel of conviction we can at this point only call faith (faith in Moore's law, faith that the planet won't burn up before, faith that consciousness is the kind of thing that can be stuffed into a GPU). Perhaps just a strong family resemblance? You've got an eschatology, various scavenged philosophies of the self and community, a certain but unknowable future time...

Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!


This is a nice sentiment, and I'm sure some people will get more nights of good sleep thinking about it, but it has its limits. If you're enslaved and treated horrendously, or don't have your basic needs met, who cares?

To quote George RR Martin: "In a heartbeat, a thousand voices took up the chant. King Joffrey and King Robb and King Stannis were forgotten, and King Bread ruled alone. 'Bread,' they clamored. 'Bread, bread!'"

Replace Joffrey, Robb and Stannis with whatever lofty philosophical ideas you might have to make people feel better about their disempowerment. They won't care.


Whether you are talking about the disempowerment we or some of us already experience, or are more on the page of thinking about some future cataclysm, I think I'm generally with you here. "History does not walk on its head," and all that.

The GRRM quote is an interesting choice here though. It implies that what is most important is dynamic. First Joffrey et al., now bread. But one could go even farther in this line: ideas, ideology, and, in GoT's case, those who peddle them can only ever form ideas within their context. Philosophers are no more than fancy pundits, telling people what they want to hear, or even sustaining a structural status quo that is otherwise not in their control. In a funny, paradoxical way, there are certainly a lot of philosophers who would agree with something like this picture.

And just honestly, yes, maybe killing god is killing the philosopher too. I don't think Nietzsche would disagree at least...


It's not hard to find a religious analogy to anything, so that also shouldn't be seen as a particularly powerful argument.

(Expressed at length here): https://slatestarcodex.com/2015/03/25/is-everything-a-religi...


Thanks for the thoughtful reply! I am aware of and like that essay some, but I am not trying to be rhetorical here, and certainly not trying to flatten the situation to just be some Dawkins-esque asshole and tell everyone they are wrong.

I am not saying "this is religion, you should be an atheist," I respect the force of this whole thing in people's minds too much. Rather, we should consider seriously how to navigate a future where this is all at play, even if its only in our heads and slide decks. I am not saying "lol, you believe in a god," I am genuinely saying, "kill your god without mercy, it is the only way you and all of us will find some happiness, inspiration, and love."


Ah, I see, I definitely missed your point. Yeah, that's a very good thought. I can even picture this becoming another cultural crevasse, like climate change did, much to the detriment of nuanced discussion.

Ah, well. If only killing god was so easy!


> Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!

It's more likely the superintelligent machine god(s) will kill us!


>In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough

This was never the case in the past.

The displaced workers of yesteryear were never at all considered, and were in fact dismissed outright as "Luddites", even up until the present day, all for daring to express the social and financial losses they experienced as a result of automation. There was never any "it's going to be okay, they can just go work in a factory, lol". The difference between then and now is that back then, it was lower class workers who suffered.

Today, now it's middle class workers who are threatened by automation. The middle is sighing loudly because it fears it will cease to be the middle. Middles fear they'll soon have to join the ranks of the untouchables - the bricklayers, gravediggers, and meatpackers. And they can't stomach the notion. They like to believe they're above all that.


I don't particularly believe superhuman AI will be achieved in the next 50 years.

What I really believe is that we'll get crazier. A step further than our status quo. Slop content already fries my brain. Our society will become more insane and useless, while an even smaller percentage of the elite will keep studying, sleeping well, and avoiding all this social media and AI psychosis.


The social media thing is real. Trump and Vance are the strangest, most vile politicians we've ever seen in the USA, and it's certain their oxygen is social media. Whether it's foreign interference helping them be successful or not, they wouldn't survive without socials and filter bubbles and the ability to spread lies on an unprecedented scale.

I deleted my Instagram a month ago. It was just feeding me images of beautiful women; I personally enjoy looking at those photos, but it was super distracting to my life. I found it distracting and unhealthy.

Anyway, I logged in the other day after a month off it, and I couldn't believe I had spent any time on there at all. What a cesspool of insanity. Add in the fake AI images as well and it's just hard to believe the thing exists at all.

Elon Musk is another story. I'm not sure if it was drugs, an underlying psychological issue, or Twitter addiction, but he seems like another "victim of social media". The guy has lost it.


I'm not an IG "user" (I'm writing that word in the "addict" sense), but I believe you're right about its harmfulness.

On the Elon front, you're not alone in thinking that he has essentially OD'ed on Twitter, which has scrambled his brain. Jaron Lanier called it "Twitter poisoning":

https://www.nytimes.com/2022/11/11/opinion/trump-musk-kanye-...


>technological unemployment.

I am too, but not for the same reason. I know for a fact that a huge swath of jobs are basically meaningless. This "AI" is going to start giving execs the cost-cutting excuses they need to mass-remove jobs of that type. The job will still be meaningless, but done by a computer.

We will start seeing all kinds of disastrously anti-human decisions made and justified by these automated actors that are tuned to decide or "prove" things that just happen to always make certain people more money. Basically the same way "AI" destroys social media. The difference is people will really be affected by this in consequential, real-world ways; it's already happening.


> automation meant that workers could move into non-automated jobs, if they were skilled enough.

That wasn't even true in the past; or at least, it may have been true in theory but not in practice. A subsistence farmer in a rural area in Asia or Africa finds the market flooded with cheap agri-products from mechanized farms in industrialized countries. Is anybody offering to finance his family and send him off to trade school? And build a commercial and industrial infrastructure for him to have a job? Very often the answer is no. And that's just one example (though rather common over the past century).


> And if AI lets us live but continues to pursue its own goals, humanity will from then on be only a small footnote in the history of intelligence: that relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

That is an interesting statement. Wouldn't you say this is inevitable? Humans, in our current form, are incapable of being that "advanced intelligence". We're limited by our biology primarily with regards to how much we can learn, how far we can travel, where we can travel, etc. We could invest in advancing our biotech to make humans more resilient to these things, but I think that would be such a shift from what it means to be human that I think that would also be more a of new type of intelligence. So it seems like our fate will always be to be forgotten as individuals and only be remembered by our descendants. But this is in a way the most human thing of all, living, dying, and creating descendants to carry the torch of life, and perhaps more generally the torch of intelligence, forward.

I think everything you've said is a valid concern, but I'll raise a positive angle I sometimes think about. One of the things I find most exciting about AI is that it's the product of almost all human expression that has ever existed. Or at least everything that's been recorded and wound up online. But that's still more than any other human endeavour. A building might be the by-product of maybe hundreds or even thousands of hands, but an AI model has been touched by probably millions, maybe billions of human hands and minds! Humans have created so much data online that it's impossible for one person, or even a team, to read it all and make any sense of it. But an AI sort of can. And in a way that you can then ask questions of it all. Like you, there are definitely things I'm uncertain about with the future as a result, but I find the tech absolutely awe-inspiring.


China's economy would simply crash if they ever went to war with the US. They know this. Everyone knows this, except maybe you? China has nothing to gain by going to "hot" war with the US.

The war would be about world domination. There can be at most one such country. The same reason a nuclear war between the US and the Soviet Union could have happened.

The Soviet Union didn't do as much business with the U.S.; they are a country of thugs who think violence is the only way to get their way.

China is very different. Their economy is very much dependent on trade with the U.S. and they know that trying to have "world domination" would also crash their economy completely. China would much rather engage in economic warfare than military.


The most likely reason for war would be to prevent the other country from achieving world domination by means other than war. John von Neumann (who knew a thing or two about game theory) recommended to attack the Soviet Union to prevent it from becoming a nuclear superpower. There is little doubt it would have worked. The powers between the US and China are more balanced now, but the stakes are also higher. A superintelligent weapon would be more powerful than a large amount of nuclear warheads.

[flagged]


Oh, so you think Putin is somehow better than Biden? Is that true? Russia has been run by the Russian mafia for a long time - tell me when the last time they had an actually fair election was. Russia is a shit country in every way compared to the US. Russia has put their men into a meat grinder to continue attacking their neighbor under the falsest of pretenses (the claimed reason is "Nazis"). Remind me about the last time the US attacked Canada or Mexico?

.

I've scolded the other user, but you also broke the site guidelines in this thread. HN is not supposed to be a place for ideological or political battle, and when two users are going after each other like this, both are at fault.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Will do. Yes, I was indulging when I shouldn't have. Thanks.

>Western [tech] bros are insufferable.

And Putin threatening the western world with nuclear war because they are losing a war they didn't need to start is not more insufferable? Come on.

>The US is the one that kept couping and ruining my country along with UK/Western Europe.

Okay, what country, [name redacted]?

>My example is one of tons and tons.

Definte "a ton" in terms of geopolitics, and maybe we can continue, but I feel like it's really not worth it for either of us.

>Keep telling yourself Russia is somehow worse

I don't have to, Putin keeps proving it every single day.

>The Russian empire colonies are far fewer than the West.

That isn't because they are benevolent or kind. You're mixing up quite a bit of stuff in your head to arrive at a dubious world view. Russia is nobody's friend.

>Your example of Mexico is so bad.

Russia is attacking their neighbor because they are opportunistic, and a bunch of thugs and liars. If the US ever wanted to take over Mexico or Canada, that would have already easily happened, but there's plenty of peace over here.


Bringing in someone's personal details (such as a real name they haven't used here) is a serious violation of HN's guidelines. You can't do that here, regardless of how wrong someone is or you feel they are. Please don't do it again.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

I've redacted the name now. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Thanks for the reminder. Will do.

I think it more likely that China will sabotage our electrical grid and data centers.

For what purpose, so that we can't buy more stuff from them? Do they really hate our business that much? China really has nothing to gain from crippling the US.

Ironically, this feels like a comment written by AI

That's not ironic, it's embarrassing that you can't tell the difference.

I don't believe people who claim they can always tell the difference.

I believe they believe their claims; I just think they are mistaken about the empirical side of things, and an actual objective test would show it.

Take an expert prompter, the best LLM for the job, and someone who believes they can always tell if something is written by AI, and I'm >50% sure they will fail a blind test most of the time once you repeat the test enough.


It’s a really common mistake, and IMO an easily excusable one.

I should have said "concerning", not "embarrassing".

Although there are potential upsides too.

Morlock has entered the chat...

I was thinking more of https://149909199.v2.pressablecdn.com/wp-content/uploads/201... from the Wait But Why thing.

> They're computer systems that are terrible at maths and that can't reliably lookup facts!

It seems plausible OpenAI's most recent model is better at math and googling than the average human.


I agree. But then, a TI-30 is also better at math than the average human. Can't google though...

It's not better at math. It can only compute some operations better, but there is much more to math than that. Otherwise, this wouldn't be considered cheating: https://news.ycombinator.com/item?id=41550907

I'm sure somebody must have hacked their TI to google stuff.

I've seen math PhDs mess up addition and subtraction on a whiteboard, though.

Beating the 99th percentile human at any subject should not be difficult when the LLM training is equivalent to living thousands of lifetimes spent reading and nearly memorizing every book ever written on every university subject.

The fact that it only just barely beats humans feels hollow to me.

For those who've seen it, imagine if at end of Groundhog Day everyone in the crowd went, "Wow, he's slightly better than average at piano!"


> I'll add that if you think training models takes a lot of energy, try launching fleets of rockets to maintain an artificial satellite constellation.

It's not the training that makes it difficult! It's the necessary research to invent machine learning algorithms which can be used to train a model to recognize birds. For multiple decades, this was way harder than maintaining a satellite constellation.


LLM vs. LLM fine-tuned to be a helpful, inoffensive chatbot. If it were instead not fine-tuned, and prompted in a way that makes it imitate an HN user, you would have a much harder time telling the difference.

Or the Dell Streak with its "huge" 5 inch screen in 2010. A "phablet". Today it would be considered on the smaller side.

Except that AR glasses that require a controller are totally impractical when using them in everyday life.

I very much agree with you, there is no practical scene for AR glasses yet

No, I meant AR glasses would be impractical with a controller whether or not AR glasses have a practical "scene".

> The violin is a comparatively quiet instrument

I knew it!


It seems blogs weren't social networks in the way Friendster was. Average people didn't have a blog. Creating a Friendster account was much easier.

>Friendster is a social network originally based in Mountain View, California, founded by Jonathan Abrams and launched in March 2003.

https://en.wikipedia.org/wiki/Friendster

Livejournal (where anybody could create an account) was a couple of years earlier:

>American programmer Brad Fitzpatrick started LiveJournal on April 15, 1999, as a way of keeping his high school friends updated on his activities.

https://en.wikipedia.org/wiki/LiveJournal

