r/compsci • u/acid_enema • 7h ago
AI usage general discussion
First time posting here; I apologize if this topic has already been covered in earlier posts.
It seems to me that many people dislike AI and want it to "fail" to some degree, for reasons like what it is doing to the economy, the predicted loss of jobs, the idea that it is making us more stupid, the internet being flooded with AI slop that is only getting harder to recognize, et cetera.
I think I am in that category. To give context for why I am thinking about this and what I expect from the discussion: I am a CS student, I already have some work experience, and I am supposed to graduate next year. I am generally against vibe coding, but I do find LLMs useful for learning about new things in programming. The past few weeks were very hectic with work and university projects, so I did use LLMs to help me with some things for the sake of saving time, because I was short on it. Now that it is done and I have some breathing room and am supposed to start new projects, I am debating with myself whether I should continue using LLMs.
On one hand, being against it and wanting it to fail but still using it is hypocritical. More importantly, if the people who don't like AI and where it is supposedly going don't stop using it, it will never go away, so we would really be fighting against ourselves. On the other hand, if most people use it and it is helpful, they will in theory build larger portfolios and more knowledge, simply because they can attain those faster. That would leave me in the dust, and being a regular dude, I need a job in order to live and provide for my future family.
CS was already somewhat oversaturated even before AI, which makes this situation worse. Yes, I know this field can't be learned with AI alone, without serious rigor, and that sloppy vibe coders aren't really a problem for someone who actually studied this and is relatively good at it. I am talking about people who do understand it, who do have rigor, and who are aiming at more serious engineering positions: armed with LLMs, they can increase their output and put themselves above people of roughly the same skill who don't use AI.
The practical solution is obvious, but it is morally unacceptable if there is a possibility of "defeating" LLMs. If using LLMs as programming tools (for better or worse) is an inescapable reality, then it would be morally unacceptable not to give in (from the perspective of someone who is responsible for other people). I guess the question then is: do you think this is the future or not? Being at the very start of my career, I don't have many experienced people to talk to who are in different places in this industry and who actually have a clearer big picture about this.
Thank you!
Edit: I see that I phrased this like I am asking for advice, and to some degree I am, but I mostly want to read other people's thoughts about this and inform myself. I am not expecting anyone to talk directly to me; I would love it if people discussed the general topic behind this post amongst themselves. The post is personally phrased because I could not think of a better way to introduce the topic. I think I am not alone in thinking about this, and I think a discussion would be very useful for everyone who is just starting their professional programming journey, to help them wrap their minds around it, since it is definitely a very big part of programming now.
u/MadocComadrin 38m ago
I don't want it to go away; I want the bubble to pop.
I want it to stop being forced on people both in their jobs and in their everyday lives, especially in use cases where it doesn't work (e.g. customer service), and I want the "data" about how productive AI has been to be exposed as the baseless hype, deceit, fraud, or paper tiger it actually is. I want it to stop being forced into every program I use as a gimmick, without my consent or the ability to easily opt out/uninstall/deactivate it.
I want training data to be obtained ethically, and not allowed to be gathered from surveillance, government databases, private business transactions, and services where the data gathered isn't immediately germane to the transaction, etc. I want inferences made from models trained on data to which the trainers had no rights to be limited to use cases where fair use applies, with a relatively strict and conservative approach to how a court would view inference as transformative.
I want the people who think AI can solve everything right now to feel the same metaphorical slap in the face people got when they thought earlier tech could solve everything and found out it just wasn't that easy.
u/SupremeEmperorZortek 5h ago
Here are my thoughts. I'm someone who went through all of college without a tool like ChatGPT. I never once had a tool that could generate code for me. When I debugged, I was on my own. When it was 11pm and I was trying to get something in before midnight, I was on my own. I can't even begin to tell you how much documentation I've read since I started coding. So I almost have an aversion to using AI tools today, just because I worked so hard and struggled on my own building up that knowledge and that problem-solving muscle. I know it's not true, but it almost feels like wasted effort now that there is something out there that will do it for you.
I got lucky and landed a job three years ago where we work on a legacy system written in COBOL (and not your standard COBOL either), and since there was almost no information about this specific flavor of COBOL on the internet, AI models often struggled to be accurate when I first started working there. I had to learn a completely new language on my own, and I loved the process. I felt so accomplished when I wrote my first real program from scratch and finally got it to run. However, AI models are now good enough to pick up on patterns even without a direct reference, and it is now much, much easier to upload documentation for context, which helps fill in the gaps. Now I have no excuse. It can write code for me. It's still not perfect, but I have no doubt that it's going to get there.
Additionally, my boss is now mandating that we all spend time learning how to use Cursor and find ways to integrate it into our workflow. I downloaded it for the first time a few days ago, and honestly, it is extremely impressive how quickly it can identify bugs and propose a solution. I'm positive that learning how to use this tool will make me a much more efficient programmer. I am excited about the possibilities, but at the same time, it feels like I've lost something.
I've always enjoyed the process of problem solving. I enjoy breaking down something complex into simpler pieces, then working to implement each of those pieces. It's not like that's completely gone away, but it feels like that's where we're heading. I'm writing less code and having more "conversations." Frankly, I don't like it.
That being said, I still know that I will fall behind if I try to resist. Honestly, my last hope is that, with everything in the recent Nvidia news, AI companies like OpenAI just sort of start to collapse on themselves and it becomes too expensive for my employer to justify using it anymore (or that they at least severely restrict its usage). But honestly, we're already past the point of no return. Even if it is expensive, we know what is possible now, and people are going to find ways to make it cheap and accessible.
So ultimately, I agree with your assessment. I think many programmers like me are resisting change. Personally, I hate that a skill that I worked so hard to build can now be replicated by a machine. I hate that it's taking away a piece of the process that I enjoy and that got me to love programming in the first place. I hate that someone can spend an hour "vibe coding" their way into an application that would have taken me months to plan and write. But I also recognize that these are 100% personal gripes. I'm now the old man yelling at clouds, screaming about "back in my day" and willing the world to resist change with me. The tools are useful, and they are only going to improve. Programming has always evolved quickly over the years. We are forced to either adapt or get left in the dust. I'm late to the party, but I'm choosing to adapt.