AstonJ

How useful do you think AI tools are/will be?

AI has been a hot topic here on Devtalk recently, so along that theme: How useful do you think AI dev tools are right now and how useful do you think they might become?

There are multiple polls below, please vote in each one :icon_biggrin:

How useful do you think AI tools are to developers right now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that I consider them a necessity

How useful do you think AI tools will be 5 years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity

How useful do you think AI tools will be 10 years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity

How useful do you think AI tools will be 15 years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity

How useful do you think AI tools will be 20+ years from now?

  • Not useful
  • Somewhat useful
  • Moderately useful
  • Very useful
  • So useful that they would be considered a necessity

Most Liked

mindriot

10-20 years out is a long, long guess. Small pool of results so far, but I was a bit surprised that, despite being pretty skeptical about the claims, I tended to rate these tools as more useful in the near term and less useful in the long term. I have pretty deep concerns about the ability of these things to learn or keep up with improvements, since even the current level of use is already producing a deluge of slop that becomes the training pool for future statistical models. The long term feels to me like it needs another paradigm breakthrough on the order of the current LLM wave just to keep these things as useful as they were before people started using them.

mindriot

Well, I’m not saying anything for sure about the future. What I am saying is that the breakthrough that just happened was a matter of consuming all the content on earth to train on (with questionable respect for who owns said content). So now we have statistical models that output the average of what was available (or, in other words, the bottom 20% in terms of quality, if you’re lucky). Then everyone uses said tools to massively increase the quantity of content at the bottom end of the quality range. So we are dramatically shrinking the statistical impact of new, valuable information to train on relative to the junk being emitted into the digital world, and we have already consumed what it took to get here.

Let’s just say we keep going and it does get better. Then say tomorrow there is a new security issue, something deep and fundamental, an “everything we do is a SQL injection attack” level crisis on every request. Who is going to raise this to a statistically significant data point that actually influences the training? Not anyone using AI tools, right? (On this point: Copilot has flagged several hundred PRs I’ve created in work repos because it doesn’t know about new features in languages or frameworks, and it has only been a reviewer on those repos since early this year. To be fair, it has also made a few reasonable suggestions, but those don’t relate to this specific point about it asserting outdated nonsense that is objectively false.)

Further, how does any new technology or approach become part of the solution? We end up in a world where everyone uses a tool built on the current situation to create the next one, and how that tool sees the world dominates what it can learn from, to the exclusion of other inputs, unless someone intervenes.

Which brings me to the next point: what happens when these companies no longer want to lose millions per day setting fire to rainforests to power this hugely inefficient monster of mediocrity? The obvious move is to sell training bias, influencing what people use, as a form of advertising.

On the other hand, let’s pretend it doesn’t play out like this, and it’s instead some exponential curve of god-like power that takes over all need to do any work. Who is going to buy this? Who are their customers? When you create a situation where some 60-70% of the workforce becomes jobless in a five-year window, like the hype sells, who are you selling your product to? Best case is the collapse of capitalism, but it could approach an extinction-level event if it played out the way Hacker News and Twitter seem to be selling it to CEOs.

Either way, I see the current bump as a short-term thing. Either it balances out or we are all living Mad Max style in a few years. No one really wins, because neither the idea nor the advantage is sustainable. (But I’m not saying anything about the future; every day it seems like some significant section of humanity is doing something more stupid than I thought possible… maybe for the second time.)

joeb

Thanks. I am probably an old head in the sense that I still want to do my own coding and thinking, not just depend on an AI assistant to do most of my work.
