I’ve read this so many times in the past few days that I’m just going to write this. As I see it, using what we have available right now (which isn’t “AI” in any meaningful way) to do simple math is weird, since we already have calculators for that.
Meanwhile, I’m at best absolute shit at Python, and I just made a calculator with a rudimentary UI in about 45 minutes using nothing but an AI, ctrl+c/v, and some sorting out of the bits, as it were.
So far the math has checked out on that calculator, too.
It’s not like I don’t have a basic calculator to test the output, is it?
I might’ve also understated my Python a little; I do understand what the code does. Obviously you could break it, but that wasn’t the point. I was more thinking that throwing math problems at what is essentially a language interpreter isn’t the right way to go about things. I don’t know shit, though. I guess we’ll see.
I have no idea what you’re trying to say here.
If you want to learn how to code, writing a calculator with a UI isn’t a bad idea. But then you should code it yourself, because otherwise you won’t learn much.
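For context, the math core of such a calculator is small enough to write yourself. This is not the code from the thread, just a minimal sketch of what the arithmetic part might look like: a safe expression evaluator that walks Python's own syntax tree instead of calling `eval()` on raw input. The UI layer (tkinter or similar) is deliberately left out.

```python
# Minimal sketch of a calculator's math core (not the thread author's code).
# Parses an arithmetic string with the stdlib ast module and evaluates only
# whitelisted operations, so arbitrary code in the input cannot run.
import ast
import operator

# Whitelisted binary and unary operations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression like '2 * (3 + 4)'."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval"))

print(calculate("2 * (3 + 4)"))  # 14
```

Writing something like this by hand, then bolting a UI on top, is roughly the learning exercise being described; the point of doing it yourself is that the parsing and evaluation decisions are where the learning happens.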
If you want to see whether LLMs can write code that executes, then fine, you succeeded. I absolutely fail to see what you gain from that experiment, though.
Expand that into 10k line custom programs and you’ll begin having nightmarish issues.
That might be the underlying problem. Software project management around small projects is easy. Anything that has a basic text editor and a Python interpreter will do. We have all these fancy tools because shit gets complicated. Hell, I don’t even like writing 100 lines without git.
A bunch of non-programmers make a few basic apps with ChatGPT and think we’re all cooked.
No doubt. I was merely suggesting that throwing math problems might not have been the intended use for what is essentially a language interpreter, obviously depending on the model in question.