

My college does the same, but it depends on the professor. Most would sit in the hallway and just chill.




Oh yeah, I don’t play games on it; we only watch YouTube and movies.
So GSConnect is actually a way better experience than any controller in my use case :)


I have a home theater PC, and I was really interested in giving it a “TV interface,” but it turns out that the standard Dash to Dock (on the left), GSConnect, and large scaling with big icons work just fine, if not better :)


Is the same effect achievable by a local program under user-space/container confinement?
I certainly cannot inspect every program I have to run. In fact, most modern programs have a deep supply chain, and I cannot make sure there is no point in that supply chain that wants to get root :(


Okay, probabilistic programming is actually super cool. Let’s not confuse that with random slop.


I am sorry, but the very picture pondering whether things are AI or not looks terribly like AI itself…


Boy, you thought blocking ml and hexbear was enough to get away from this bullshit. Sigh…
Yeah, unfortunately my class does not train students in essay writing. I do require students to write a proposal for their project that details its motivation and spec. I feel most of them are not AI-written.
While it is very easy to trick ChatGPT 3.5 into submission, modern models, especially paid ones, are hard to trick without putting students who don’t use AI at a disadvantage.
So the alternative is making the class very verbose and/or requiring much deeper understanding and novelty than is within the scope of an introductory class (which most undergrad/grad classes are).
For now, what I am doing is making the homework optional or worth very little, and grading based on exams, quizzes, participation, and projects. Since everyone will get a perfect score on homework anyway, there is no point in evaluating it nowadays :(
I tried it for my class, and the questions they come up with are boring, repetitive, and generic.
I feel very sorry for you that you need to endure that.


Amos Bar-Joseph, CEO of Swan AI, bragged about his Anthropic bill in a viral LinkedIn post, saying “We’re building the first autonomous business - scaling with intelligence, not headcount.”
Given how unintelligent this sentence is, maybe using an LLM is indeed the more intelligent choice for them after all…


I still remember the good old days when Google had the best code quality among the big tech companies. That being said, seeing how shitty everyone’s code has become, Google might still be the best :)


That is the total package, right? Most people will not get the full benefit of the package; the cash is usually around 150k, and with the cashable part of the package it would be closer to 200k for most.


LLMs are very good at programming when there are a huge number of guardrails against them. For example, exploit testing is a great use case because getting a shell is getting a shell.
They kind of act as a smarter version of infinite monkeys, able to try and iterate much more efficiently than humans do.
On the other hand, in tasks that require creativity and architecture, and in projects without guardrails, they tend to do a terrible job, often yielding solutions that are more convoluted than they need to be or just plain incorrect.
I find it is yet another replacement for “pure labor,” where the most unintelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I will likely be able to automate some code writing if I know exactly how to implement it in my head and also have access to plenty of testing to guarantee correctness.
$11 per hour is not an okay wage in the U.S.