I like AI on the producing side. Not so much on the consuming side.
I don't want you to send me an AI-generated summary of anything, but if I initiated it looking for answers, then it's much more helpful.
- I'm reviewing the notes from the last meeting in a regular meeting cadence to see what we need to discuss.
- I put it in a lookup (vector store, whatever) so I can do things like "what was the thing customer xyz said they needed to integrate against".
Those are pretty useful. But I don't usually read the whole meeting notes.
I think this is probably more broadly true too. AI can generate far more text than we can process, and text treatises on what an AI was prompted to say are pretty useless. But generating text not to present to the user, but as a cold store of information paired with good retrieval, can be pretty useful.
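For what it's worth, the "cold store plus retrieval" idea doesn't need much machinery. A minimal sketch, using TF-IDF as a stand-in for whatever embedding model or vector store you actually use (the note chunks and query here are made up):

```python
# Minimal sketch: index AI-generated meeting-note chunks, then query them on demand.
# TF-IDF is a stand-in for a real embedding model / vector store.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one entry per chunk of generated meeting notes.
notes = [
    "Customer XYZ said they need to integrate against our billing webhook.",
    "Action item: follow up on the Q3 roadmap review next Tuesday.",
    "Decision: keep the current auth flow until SSO lands.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(notes)  # the "cold store"

def search(query: str, top_k: int = 2):
    """Return the top_k note chunks most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    ranked = sorted(zip(scores, notes), reverse=True)[:top_k]
    return [text for score, text in ranked if score > 0]

print(search("what did customer xyz want to integrate against?"))
```

Nobody ever reads the notes end to end; they just ask questions like that against the store.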
I think this is about when the app is broken and people are keeping a meeting app open to communicate with each other as they scramble to fix things.
So the limitation here is more about problems not being solved yet rather than how a 'meeting' is organized.
But to be blunt / irreverent, it's the same with Git commit messages or technical documentation; nobody reads them unless they need them, and only the bits that are important to them at that point in time.
You know what really, really helps while doing code review? Good commit messages, and more generally, good commit practices, so that each commit describes a set of changes that make sense together. If you have that, code review becomes much easier: you just step through each commit in turn and see how the code got to be where it is now, rather than GitHub's default "here's everything, good luck" view.
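Stepping through a branch commit by commit is just `git log --reverse` plus `git show`. A rough sketch of that workflow, assuming a hypothetical branch named `feature` cut from `main`:

```python
# Rough sketch of commit-by-commit review, assuming a hypothetical
# branch "feature" that was cut from "main".
import subprocess

def commits_on_branch(base: str = "main", branch: str = "feature") -> list[str]:
    """List commit hashes on `branch` but not on `base`, oldest first."""
    out = subprocess.run(
        ["git", "log", "--reverse", "--format=%H", f"{base}..{branch}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def show_commit(sha: str) -> str:
    """Return the commit message and diff for one commit."""
    out = subprocess.run(
        ["git", "show", sha], capture_output=True, text=True, check=True
    )
    return out.stdout

for sha in commits_on_branch():
    print(show_commit(sha))  # read each step in the order it was written
```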
The other thing that helps? Technical documentation that describes why things are as they are, and what we're trying to achieve with a piece of work.
Unrelated, but I don't know why I expected the website and editor theme to be hay-yellow, or hay-yellow and black, instead of the classic purple on black :)
Yeah originally I thought of using yellow/brown or yellow/black but for some reason I didn't like the color. Plenty of time to go back though!
Could you expound on this? In my experience as a software engineer, a pull request could fall into one of two buckets (assuming it's not trivial):
1. The PR is not organized by the author so it's skimmed and not fully understood because it's so hard to follow along
2. The PR author puts a lot of time into organizing the pull request (crafting each commit, trying to build a narrative, etc.) and the review is thorough, but still not easy
I think organization helps the 1st case and obviates the need for the author to spend so much time crafting the PR in the 2nd case (and eliminates messy updates that need to be carefully slotted in).
Curious to hear how y'all handle pull requests!
I agree with this wholeheartedly if you are in a role that allows you to redefine what a PR is. In almost every organization that I've worked for, the PR is defined several levels above my pay grade and suggesting changes/updates/etc is usually seen as complaining.
> companies don't want to invest in slowing down, only going faster.
I do think this is the way things are going to go moving forward, for better or for worse!
As for other people's PRs? If they don't give a good summary, I ask them to write one.
Exactly; if people can't be bothered to describe (and justify) their work, or if they outsource it to AI that creates something overly wordy and possibly wrong, why should I be bothered to review it?
1. Allow me to step through the code execution paths that have been modified in the pull request, based on the tests that have been modified.
2. Allow me to see the data being handled in variables as I look through the code.
3. Allow me to see code coverage of each part of the code.
4. Show me the full file as I am navigating through the program execution so that I can feel the level of abstraction and notice nearby repetition or code that would benefit from being cleaned up.
Full article: https://dtrejo.com/code-reviews-sad
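On point 3, per-test line coverage is already collectable with coverage.py, which a review tool could overlay on the diff. A rough sketch (the module and test call are made up):

```python
# Rough sketch: collect which lines one test path actually executes,
# using coverage.py. Module name and call are hypothetical.
import coverage
import my_module  # hypothetical module under review

cov = coverage.Coverage()
cov.start()
my_module.handle_request({"user": "xyz"})  # hypothetical path a test exercises
cov.stop()

data = cov.get_data()
for path in data.measured_files():
    print(path, sorted(data.lines(path) or []))  # line numbers this test touched
```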
Not sure if I fully grasp this! We tried to kind of do this in previous iterations (show call graphs all at once) and it gets messy very fast. Could you elaborate on this point in particular?
Starting from the test, allow me to step through the program execution, just like a debugger, to observe variables, surrounding code, and the complete file.
If you read only the covered lines of code in a linear way, you'd miss the refactoring opportunities because you aren't looking at the rest of the file.
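That "replay the test like a debugger" view can be prototyped with a trace hook. A minimal sketch, assuming a made-up function under test:

```python
# Minimal sketch: observe variables line by line while a test runs,
# using sys.settrace as a poor man's debugger. The function is made up.
import sys

def price_with_tax(amount: float, rate: float = 0.2) -> float:
    tax = amount * rate
    return amount + tax

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "price_with_tax":
        print(f"line {frame.f_lineno}: locals={frame.f_locals}")
    return tracer

sys.settrace(tracer)
assert price_with_tax(100.0) == 120.0  # the "test" that drives the walkthrough
sys.settrace(None)
```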
Feedback: try speeding up your demo animations and resizing the mouse cursor to its regular size. My guess is that if the marketing copy explains what a thing is, what it does, and why it's useful, then all a visitor wants to see in an image is things going pop, boom, and whoosh.
Code reviews have always been primarily about reviewing other people's code.
And knowing your own code better than other people's code is a real thing.
Not sure if I fully grasp what you mean by dog whistling, but at the end of the day, like another commenter said, Haystack is also pretty helpful for when you're done experimenting with a piece of work and need to see what an AI has generated.
Failed to load resource: net::ERR_BLOCKED_BY_CLIENT
^ I'm not exactly sure what this is about. I think it is https://static.cloudflareinsights.com/beacon.min.js/vcd15cbe... which I would imagine is probably not necessary.
Uncaught TypeError: Cannot convert undefined or null to object at Object.keys (<anonymous>) at review/?pr_identifier=xxx/xxx/1974:43:12
These URLs seem to be kind of revealing.
In terms of auth: you should get an "unauthenticated" error if you're looking at a repo without authentication (or a non-existent repo).
If you install and subscribe to the product, we create a link for you every time you make a pull request. We're working (literally right now!) on making it create a link every time you're assigned a review as well.
We'll also speed up the time in the future (it's pretty slow)!
Just FYI.
There's just so much contextual data outside of the code itself that you miss out on. This looks like an improvement over Github Co-Pilot generated summaries, but that's not hard.
I hope you (eventually) ship something for AR to visualize software components interacting in 3D space.
If AI writes all of the code, we will need to max out humans’ ability to do architecture and systems design.
Strongly agree with this. There are demos that you're able to try, by the way!
Are you the same folks that worked on that?
What is your privacy policy around AI?
Any plans for a locally-runnable version of this?
It would be good to have this mentioned on the website somewhere, as part of a privacy policy. Right now I can't find any details on the site, which is preventing me from trying this out with a production repository.
Or do you mean that doing the browser navigation of "back" should bring you to the summary (initial page)?