Let's say for C-to-C, are you talking about swapping the head/tail (i.e. which end plugs into which device), or simply connecting at a different angle (flipped 180 degrees)?
It was not a cheap cable, it was a medium-priced one with good reviews from a known brand.
I picked it up to find it had shut itself off, and now won't accept any charge, wireless or wired from any combination of power sources and cables. No signs of life at all.
Is there any way to check this other than experiment?
My "solution" so far has been to not buy cheap cables and just hope I get quality in return.
Well sure, a standards-compliant cable will work in either orientation, but it's always possible for some but not all of the pins or wires to break.
Maybe the negotiation can fail, and the plugged-in orientation is then the only one that works?
The receptacles are symmetric, but a full connection is not. The cable only connects CC through end-to-end on one of A5 or B5, but not both, which lets the DFP detect which of A5 or B5 should be CC. The one not used for CC is then used to power the e-marker in the cable, if any.
This is also true for legacy adapters; for example, for C-to-A USB 3 adapters, the host needs to know which of the two high-speed pairs to use (as USB 3 over A/B connectors only supports one lane, while C supports two).
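Roughly, the decision on the DFP side works like this (purely a conceptual sketch, not real driver code; classify_termination is a hypothetical stand-in for the analog measurement a real port controller performs):

    # Conceptual sketch of USB-C orientation detection at the DFP (source) side.
    # The cable wires only ONE CC conductor end to end, so the sink's Rd pulldown
    # shows up on exactly one of the DFP's CC pins; that pin becomes CC, and the
    # other becomes VCONN, which is what powers the cable's e-marker (if any).

    def classify_termination(pin: str) -> str:
        """Hypothetical: return 'Rd' (sink attached), 'Ra' (cable VCONN load), or 'open'."""
        raise NotImplementedError  # real hardware does this with comparators/an ADC

    def detect_orientation() -> tuple[str, str]:
        terminations = {pin: classify_termination(pin) for pin in ("CC1", "CC2")}
        rd_pins = [p for p, t in terminations.items() if t == "Rd"]
        if len(rd_pins) != 1:
            raise RuntimeError("no ordinary sink attached")
        cc_pin = rd_pins[0]
        vconn_pin = "CC2" if cc_pin == "CC1" else "CC1"
        return cc_pin, vconn_pin  # CC carries the signalling, VCONN feeds the e-marker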
I always assumed that USB C cables use different pins depending on orientation, and that some pins on the cable wore down.
Maybe that's what happened here?
It would be nice to just compare with the device's reported maximum capability, but I'm not sure whether macOS exposes that in any API.
Hardware -> USB
I also use the app to check what wattage my cables are when charging my MacBook (Hardware -> Power)
I couldn't find the source (the link in the article points to a GitHub repo of a user's home directory; I hope for their sake it doesn't contain secrets), but on my system, system_profiler -json produces JSON output. From the article's text, it doesn't seem they used that.
First Go source: https://github.com/kaushikgopal/dotfiles/blob/f0f158398b5e4d...
It started out as a shell script but switched to a Go binary (which is what is linked).
It's not like the performance of this could have motivated it
Performance isn't everything; readability and maintainability matter too.
Is that the case for this vibe-coded thing? https://news.ycombinator.com/item?id=45513562
I'm just saying that I've seen several "small tools that could have been shell scripts" in Go or another more structured language and never wished they were shell scripts instead.
* https://github.com/kaushikgopal/dotfiles/blob/master/bin/usb...
Presumably there is a sensible way to do this in Go by calling an API and getting the original machine-readable data, rather than shelling out to run an entire sub-process for a command-line tool and parsing its human-readable (even JSON) output. Especially as it turns out that the command-line tool itself runs another command-line tool in its turn. StackExchange hints at looking to see what API the reporter tool under /System/Library/SystemProfiler actually queries.
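For what it's worth, the shell-out-and-parse route is only a few lines in any language. A minimal sketch below; the SPUSBDataType argument is real, but the JSON keys I'm assuming (the top-level SPUSBDataType list, _name, _items) are from memory and worth verifying against your own system_profiler -json output:

    import json
    import subprocess

    # Shell out to system_profiler and walk the USB device tree it reports.
    def usb_tree() -> list:
        out = subprocess.run(
            ["system_profiler", "-json", "SPUSBDataType"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Assumed structure: {"SPUSBDataType": [ {"_name": ..., "_items": [...]}, ... ]}
        return json.loads(out).get("SPUSBDataType", [])

    def walk(items: list, depth: int = 0) -> None:
        for item in items:
            print("  " * depth + item.get("_name", "<unnamed>"))
            walk(item.get("_items", []), depth + 1)

    if __name__ == "__main__":
        walk(usb_tree())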
No, silly me. I briefly searched for a src directory, but of course I should have searched for a bin directory, as that's where vibe coding stores sources /s.
There are plenty for Ethernet, but no such thing for USB. Was I looking with the wrong keywords, or does such a device not exist?
Note: I have a dongle that measures the power when inserted between the laptop and the charger, this is not what I am looking for
https://treedix.com/collections/best-seller/products/treedix...
That said, these things do seem to exist at this point, as sibling comments have pointed out.
As an aside, it's a real shame devices with USB-C ports don't offer this out of the box. They need to be able to read the marker anyway for regular operation!
https://fr.aliexpress.com/item/1005007509475055.html
Edit: This will test whether the cable is functioning properly. It will show the connections and indicate whether the cable supports only power or also data transfer. However, it won’t provide information about the USB-C cable type or its speed capabilities.
The latter would require a multi-thousand-dollar machine.
For a "regular" USB C that supports USB 2.0 speeds (and is rated for 60W and therefore lacks an internal e-marker chip), there's just 5 wires inside: Two for data, two for power, and one for CC. There's nothing particularly complex about testing those wires for end-to-end continuity (like a cheapo network cable tester does).
A charging-only cable requires only 3 wires.
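Conceptually it's no more than this; measure_continuity is a hypothetical stand-in for whatever fixture actually probes the matching contacts on the two plugs:

    # Conceptual sketch of a continuity test for a USB 2.0-only C-to-C cable.
    USB2_C_TO_C_WIRES = ["VBUS", "GND", "D+", "D-", "CC"]   # the five conductors
    CHARGE_ONLY_WIRES = ["VBUS", "GND", "CC"]               # the three-wire case

    def measure_continuity(wire: str) -> bool:
        """Hypothetical: True if this conductor is connected end to end."""
        raise NotImplementedError  # a real fixture probes the contacts on both plugs

    def test_cable(wires=USB2_C_TO_C_WIRES) -> dict:
        return {wire: measure_continuity(wire) for wire in wires}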
But fancier cables bring fancier functions. Do you want to test if the cable supports USB 3? With one lane, or two lanes? USB 4? Or what of the extra bits supporting alt modes like DisplayPort and MHL, and the bag of chips that is Thunderbolt -- does all of that need to be tested, too? (And no, that earlier 120Gbps figure isn't a lie.)
And power? We're able to put up to -- what -- 240W through some of these cables, right? That's a beefy bit of heat to dissipate, and those cables come with smarts inside them that need to be negotiated with.
I agree that even at the extremes, it's still somewhere within the realm of some appropriate FPGA bits or maybe a custom ASIC, careful circuit layout, a big resistor, and a power supply. And with enough clones from the clone factories beating each other up on pricing, it might only cost a few hundred dollars to buy.
So then what? You test the fancy USB-C Thunderbolt cable with the expensive tester, and pack it up for a trip to an important demo -- completely assured of its present performance. And when you get there, it doesn't work anyway.
But the demo must proceed.
So you find a backup cable somewhere (hopefully you thought to bring one yourself, because everyone around you is going to be confused about whatever it is that makes your "phone charger" such a unique and special snowflake that the ones they're trying to hand to you cannot ever be made to work), plug that backup in like anyone else would even if they'd never heard the term "cable tester," and carry on.
The tester, meanwhile? It's back at home, where it hasn't really done anything but cost money and provide some assurances that turned out to be false.
So the market is limited, the clone factories will thus never ramp up, and the tester no longer hypothetically costs only hundreds of dollars. It's right back up into the multiple-$k range like the pricing for other low-volume boutique test gear is.
(I still want one anyway, but I've got more practical things to spend money on...like a second cable to use for when the first one inevitably starts acting shitty.)
Is that 120 Gbps or 120 GB/s as the previous poster stated? 120 GB/s is on the order of DDR5 throughput - I doubt we have any kind of cheap cable tech that can carry that kind of bandwidth right now - while 120 Gbps is more like PCIe Gen 5 NVMe SSD speed.
I myself definitely meant gigabits per second. 120Gbps is about what 2x 8k 60Hz monitors use at a constant rate in the simplest sense (pixels), so that's what I assumed they were talking about.
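The rough arithmetic behind that, assuming 10-bit-per-channel color and ignoring blanking, DSC compression, and protocol overhead:

    # Raw pixel bandwidth for two 8K @ 60 Hz displays at 30 bits per pixel (10 bpc RGB),
    # ignoring blanking intervals, compression (DSC), and protocol overhead.
    width, height, bpp, hz = 7680, 4320, 30, 60
    per_display_gbps = width * height * bpp * hz / 1e9   # ~59.7 Gbps
    print(f"{per_display_gbps:.1f} Gbps per display, {2 * per_display_gbps:.1f} Gbps for two")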
But it's more complex than that.
Looking a bit deeper: It seems that TB5 (which uses USB C) is natively a symmetric 80Gbps: 80Gbps one way, and 80Gbps the other way. 160Gbps, total, counting both directions -- plus or minus overhead.
It can also use a "turbo" mode where things are shifted to be 120Gbps one way (host-to-device) and 40Gbps the other way (device-to-host), which is still 160Gbps in aggregate.
It isn't clear how that dual-8K-display mode leaves any extra bandwidth for other things that may be downstream, like storage devices, but that's as deep as I feel like going.
And that is deep enough to answer your question: it is definitely in the neighborhood of [up to] 120 Gbps [in one direction], and it is definitely not in the neighborhood of 120 GBps [in any direction, or in aggregate].
Related: If you are looking for cables, this guy has tested a bunch (mainly for charging capabilities) https://www.allthingsoneplace.com/usb-cables-1
And some metrics on internal reflections.
480 vs. 5000 Mbps is a pernicious problem. It's very easy to plug in a USB drive and have it look like it works fine and is reasonably fast. Right until you try to copy a large file to it and wonder why it is only copying 50 MB/s.
It doesn't help that the world is awash in crappy charging A-to-C cables. I finally just threw them all away.
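The back-of-envelope numbers, for anyone wondering why "USB 2 speed" feels like ~40 MB/s rather than 60 (if I remember the spec right, high-speed bulk transfers top out at 13 512-byte packets per 125 µs microframe):

    # Why a drive that only negotiated Hi-Speed (480 Mbps) tops out around 40-50 MB/s.
    raw_MBps = 480 / 8                           # 60 MB/s raw line rate
    bulk_ceiling_MBps = 13 * 512 / 125e-6 / 1e6  # ~53 MB/s protocol ceiling for bulk
    print(raw_MBps, round(bulk_ceiling_MBps, 1))
    # Filesystem and device overhead pull real-world copies down further, to ~35-45 MB/s.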
Couldn't figure out why my 5-disk USB enclosure was so ungodly slow. I quickly saw that it was capping out suspiciously close to a constant ~40 MB/s, i.e. 480 Mbps.
lsusb -v confirmed. As it happened I did some maintenance and had to unplug the whole bay.
Since the port was nearly tucked against a wall, I had to find it by touch and insert the plug somewhat slowly, in steps (brush a finger/the cable tip around to find the port, insert the tip at an angle, straighten it, push it in), but once I knew where it was it was easy to unplug and reinsert quickly...
This was driving me "vanilla ice cream breaks car" nuts...
And if you hate this, you should probably never look into those (illegal by the spec, but apparently often functional in practice) splitters that separate the USB 2 and USB 3 paths of a USB 3-capable A port so that you can run two devices on it without a hub ;)
(which is inconvenient because USB 3.2 Gen 2x2 20 Gbps external SSD cases are much cheaper than USB 4 cases for now).
Also, he is calling a binary a script, which I find suspicious. This task looks like it should have been a script.
The USB-IF, in all their wisdom, used "USB 3.2" to refer to everything from 5 Gbps (USB 3.2 Gen 1×1) to 20 Gbps.
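For reference, the whole mapping, since the "USB 3.2" branding by itself tells you nothing:

    # USB 3.2 naming vs. actual signalling rate.
    USB_3_2_SPEEDS_GBPS = {
        "USB 3.2 Gen 1x1": 5,    # formerly USB 3.0 / USB 3.1 Gen 1
        "USB 3.2 Gen 2x1": 10,   # formerly USB 3.1 Gen 2
        "USB 3.2 Gen 1x2": 10,   # two 5 Gbps lanes, Type-C only
        "USB 3.2 Gen 2x2": 20,   # two 10 Gbps lanes, Type-C only
    }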
> Two years ago, I wouldn’t have bothered with the rewrite, let alone creating the script in the first place. The friction was too high. Now, small utility scripts like this are almost free to build.
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
But like the author, I've found that it's usually better to have the LLM output Python, Go, or Rust than to use bash. So I often had to ask it to rewrite at the start; now I just skip bash directly.
That all the naysayers are missing the tons of small wins happening every single day from people using AI to write code, wins that weren't possible before.
I mentioned in a thread a few weeks ago that we maintain a small Elixir-Rust library, and I have never written Rust in my life. Sure, it's about 20 lines of Rust, mostly mapping to the underlying Rust lib, but so far I've used Claude to maintain it (fix deprecations, perform upgrades, etc.).
This simply wasn't possible before.
Great for identifying not just bad cables, but also data rates.
https://www.kickstarter.com/projects/electr/ble-caberqu-a-di...
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
Looking at the GitHub source code, I can instantly tell. It's also full of gotchas.
At the same time, though, I'm at a point in my career where I'm cynical and think it really doesn't matter, because whatever I build today will be gone in 5-10 years anyway (front-end, mainly).
There are plenty of non critical aspects that can be drastically accelerated, but also plenty of places where I know I don't want to use today's models to do the work.
Ingesting JSON files into SQLite should only take half a day if you're doing it in C or Fortran for some reason (maybe there is a good reason). In a high-level language it shouldn't take much more than 10 minutes in most cases, I would think?
It depends on how complex the models are, because now you need to parse your models before inserting them, which means your tables need to be in the right format. And then you need your loops: for each file you might have to insert anywhere between 5 and 20 nested entities. And then you either have to use an ORM or write each SQL query by hand.
All of which I could do obviously, and isn't rocket science, just time consuming.
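To be fair, the flat case really is a few lines with nothing but the stdlib; it's the nested models and per-entity inserts that eat the time. A minimal sketch with a hypothetical single-table schema:

    import json
    import sqlite3
    from pathlib import Path

    # Minimal sketch: dump a directory of JSON files into one SQLite table.
    # The "records" table and the "id" field are hypothetical; nested models need
    # their own tables, foreign keys, and per-entity INSERTs, which is where the
    # real time goes.
    def ingest(json_dir: str, db_path: str = "data.db") -> None:
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, payload TEXT)")
        for path in Path(json_dir).glob("*.json"):
            doc = json.loads(path.read_text())
            con.execute(
                "INSERT OR REPLACE INTO records (id, payload) VALUES (?, ?)",
                (doc.get("id", path.stem), json.dumps(doc)),
            )
        con.commit()
        con.close()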
As far as I can tell from skimming the code, and as I said, without knowledge of Go or the domain, the "shape" of the code isn't bad. If I got any vibes (:)) from it, it was the lack of error handling and the over-reliance on exactly matching strings. Generally speaking, it looks quite fragile.
FWIW I don't think the conclusion is wrong. With limited knowledge he managed to build a useful program for himself to solve a problem he had. Without AI tools that wouldn't have happened.
On the whole, it might work for now, but it'll need recompiling for new devices, and is a mess to maintain if any of the structure of the data changes.
If a junior in my team asked me to review this, they'd be starting again; if anyone above junior PRd it, they'd be fired.
I have a USB-to-SATA adapter plugged in and it's labeled as [Problem].
I understood their statement to be that the device didn't correlate outgoing power vs incoming power.
So it would indicate that it is charging, because power is coming in, but not tell you that a similar amount of power (or more) is leaving.
I do find it frustrating that in a world of battery management we have indicators for charging and quick/fast/rapid charging, but next to nothing for discharge or for comparing incoming vs. outgoing power.
This aligns with the hypothesis that we should see lots and lots of "personalized" or single-purpose software if vibe coding works. This particular project is one example. Are there a ton more out there?
I always wanted a dedicated binary anyway, so 1 hour later I got: https://github.com/emilburzo/pushbulleter (10 minutes vibe coding with Claude, 50 minutes reviewing code/small changes, adding CI and so on). And that's just one where I put in the effort of making it open source, as others might benefit, never mind the many small scripts/tools that I needed just for myself.
So I share the author's sentiments, before I would have considered the "startup cost" too high in an ever busy day to even attempt it. Now after 80% of what I wanted was done for me, the fine tuning didn't feel like much effort.
So yes, there are a ton, but why bother publishing and maintaining them now that anyone can produce them? Your project is not special or worthwhile anymore.
These are all things I could do myself but the trade off typically is not worth it. I would spend too much time learning details and messing about getting it to work smoothly. Now it is just a prompt or two away.
Last weekend I had a free hour and built two things while sat in a cafe:
- https://yourpolice.events, that creates a nice automated ICS feed for upcoming events from your local policing team.
- https://github.com/AndreasThinks/obsidian-timed-posts, an Obsidian plugin for "timed posts" (finish it in X minutes or it auto-deletes itself)
This is currently the vibe in consulting: possible ways to reduce headcount, pun intended.
Just an hour ago I "made" one in 2 minutes to iterate through some files, extract metadata, and convert to CSV.
I'm convinced that hypothesis is true. The activation energy (with a subscription to one of the big 3, in the current pre-enshittification phase) is approximately 0.
Edit: I also wouldn't even want to publish these one-off, AI-generated scripts, because for one they're for specific niches, and for two they're AI generated so, even though they fulfilled their purpose, I don't really stand behind them.
Okay, but lots of us have been crapping out one-off Python scripts for processing things for decades. It's literally one of the main ways people learned Python in the 2000s.
What "activation energy" was there before? Open a text file, write a couple lines, run.
Sometimes I do it just from the interactive shell!
Like, it's not even worth it to prompt an AI for these things, because it's quicker to just do it.
A significant amount of my workflow right now is a python script that takes a CSV, pumps it into a JSON document, and hits a couple endpoints with it, and graphs some stats.
All the non-specific stuff the AI could possibly help with are single lines or function calls.
The hardest part was teasing out Python's awful semantics around some typing stuff. Why Python is unwilling to parse an int out of "2.7" I don't know, but I wouldn't even have known to prompt an AI for that requirement, so there's no way it could have gotten that right.
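For anyone who hasn't hit it: int() only accepts integer-looking strings, so you have to go through float() first (or decide explicitly how you want to round):

    >>> int("27")
    27
    >>> int(2.7)            # truncating an actual float is fine
    2
    >>> int("2.7")          # but a float-looking string is not
    Traceback (most recent call last):
      ...
    ValueError: invalid literal for int() with base 10: '2.7'
    >>> int(float("2.7"))   # parse as float first, then truncate
    2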
It's like ten minutes to build a tool like this even without AI. Why weren't you doing it before? Most scientists I know build these kinds of microscripts all the time.
Example: I rebuilt my homelab in a weekend last week with claude.
I set up Terraform/Ansible/Docker for everything, and this was possible because I let Claude handle all the arguments/details. I never used to bother because I thought it was tedious.
Adding to the theory that soon we're going to prefer writing code rather than downloading ready-made code, because the friction is super low.
Windows: There's an example in the WDK here: https://github.com/Microsoft/Windows-driver-samples/tree/mai...
Huh? Is this true? I know Go makes cross-compiling trivial - I've tried it in the past, it's totally painless - but is it also able to make a "cross platform binary" (singular)?
How would that work? Some kind of magic bytes combined with a wrapper file with binaries for multiple architectures?
I wonder how much writing these scripts cost. Were they done in Claude's free tier, pro, or higher? How much of their allotted usage did it require?
I wish more people would include the resources needed for these tasks. It would really help evaluate where the industry is in terms of accessibility: how much of it is reserved for those with sufficient money, and how that scales.
So it's not a built-in command, as the title seemed to suggest.
It will take some time (maybe more than a decade) for vibe coding to be "old" and consistently correct enough that it's no longer mentioned.
Same thing happened 30 years ago with "The Information Superhighway" or "the internet". Back then, people really did say things like, "I got tomorrow's weather forecast of rain from the internet."
Why would they even need to mention the "internet" at all?!? Because it was the new thing back then and the speaker was making a point that they didn't get the weather info from the newspaper or tv news. It took some time for everybody to just say, "it's going to rain tomorrow" with no mentions of internet or smartphones.
But in general you are right. The article was for developers so mentioning the tool/language/etc. is relevant.
> Two years ago, I wouldn’t have bothered with the rewrite, let alone creating the script in the first place. The friction was too high. Now, small utility scripts like this are almost free to build.
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
I wouldn't trust this as source code until after a careful audit. No way I'm going to trust a vibe-coded executable.
Of course I don't have any problem with the author writing the tool, because everyone should write what the heck they want and how they want it. But seeing it get popular tells me that people have no idea what's going on.
I think you have a good point about why people say it was vibe coded.
It might also be because they want to join the trend -- without mentioning vibe coding, I don't think this tool would ever reach #1 on Hacker News.
Do you care about the binary code inside your application, or what exactly happens, at the silicon level, when you write printf("Hello World")?
I verify dynamic linking and ensure no superfluous dylibs are required. I verify ABI requirements and check which version of glibc is needed to run the executable. I double-check whether the functions I care about are inlined. I consider whether I'm using a stable public or an unstable private API.
But I don't mean that the author doesn't know what's going on in his snippet of code. I'm sure he knows what's going on there.
I mean that the upvoters have no idea what's going on when they boost vibe coding. People who upvote this are the reason for the coming global decline in software quality.
It's all abstraction; we all need to not know some lower-level layer to do our jobs, so please stop gatekeeping.
That we shouldn't care about spending $1 on a sandwich, and therefore managing a home budget is pointless?
Different people will care about different layers.
https://en.wikipedia.org/wiki/Leaky_abstraction
Not caring about lower-level details within your own domain of expertise is simply carelessness. We also need to consider how the abstraction layers fit together and what the outcome is. Abstraction layers are a tool; they are not an immutable environment we are operating in.
Clients can have the luxury of not knowing what is in the details, but not programmers.
I don't know if that necessarily helps though, because I've seen USB3 cables that seemingly have the bandwidth and power capabilities, but won't do video.
Plus, it doesn't really matter if you put "e-marker 100W USB3 2x2 20 Gbps" on a cable when half those features depend on compatibility from both sides of the plug (notably, supposedly high-end devices not supporting 2x2 mode, DP Alt mode, or charging/drawing more than 60W of power).
And when they upped the max voltage they didn't do it for preexisting cables, no matter what the design was.
> those features depend on compatibility from both sides of the plug
That's easy to understand: a cable supports (or doesn't support) what the device can do; it can't give the device new abilities. That doesn't make labeling any less valuable.
If I got hold of the output and the commands being run, I would gladly modify it.
On Linux that produces a lot of info similar to the macos screenshots, but with values and labels specific to the Linux USB stack.
I wonder if AI could map the linux lsusb output to a form your tool can use...