For my uses it's great that it has both test suite mode and individual invocation mode. I use it to execute a test suite of HTTP requests against a service in CI.
I'm not a super big fan of the configuration language: the blocks are not intuitive, and I found the documentation lacking on which assertions are supported.
Overall the tool has been great, and has been extremely valuable.
I started using interface testing when working on POCs, and I found it helps with LLM-assisted development. Tests are written to directly exercise the HTTP endpoints, which allows the implementation to stay fluid and evolve as the project does.
I also found the separation of testing very helpful, and it further enforces the separation between interface and implementation. Before Hurl, my tests would be written in the test framework of the language the service was written in. The Hurl-based tests really help to enforce the "client" perspective. There is no backdoor data access or anything, just strict separation between interface, tests and implementation :)
I'm really interested in issues with the documentation: it can always be improved, and any issue report is welcome!
We had a test suite using Runscope, and I hated that changes weren't version controlled. It took a little grunt work, but I converted them to Hurl (where were you, AI?) and got rid of Runscope.
Now we can see who made what change when and why. It's great.
Loved Runscope; it served its purpose until something came along that offered the same, plus version control.
Those basically take the form
POST http://localhost:8080/api/foo
Content-Type: application/json
{ "some": "body" }
And then we have a 1-to-1 mapping of "expected.json" outputs for integration tests. We use a bespoke bash script to run these .http files with cURL, then compare the outputs with jq, log success/failure to the console, and write "actual.json".
Can I use HURL in a similar way? Essentially an IDE-runnable example HTTP request that references a JSON file as the expected output?
And then run HURL over a directory of these files?
If that's possible, I guess the only thing I'd request is interoperability with the REST Client ".http" files that the VS Code/JetBrains IDEs support.
UPDATE: Found it, looks like you can do it via the below
POST https://example.org/api/tests
Content-Type: application/json
file,insert_user.request.json;
[Asserts]
body == file,insert_user.expected.json;
So that just leaves the IDE integration bit.

Is your expected.json the actual response body, or is it an object containing body, status, header values, time taken, etc.?
I really like it because it serves 3 purposes:
- API docs/examples that you can interact with
- Test cases
- Manually invoking API endpoints when working on the underlying code, in an iterative loop
Where do you see hurl in the next 2 years?
A favorite of mine is to be available through official `apt`: there has been some work, but it's kind of stuck. The Debian integration is the most difficult one we have to deal with. It's not Debian's fault; there is a lot of documentation, but we've struggled a lot and failed to understand the process.
If I find time, I could throw together a spec file + CI/CD workflow to get you going that way too.
Worth mentioning that using Hurl in Rust specifically gives you a nice bonus feature: integration with cargo test tooling. Since Hurl is written in Rust, you can hook into hurl-the-library and reuse your .hurl files directly in your test suite. Demo: https://github.com/perrygeo/axum-hurl-test
[1] https://github.com/Orange-OpenSource/hurl?tab=readme-ov-file...
https://marketplace.visualstudio.com/items?itemName=humao.re...
Which is a banger VS Code extension for all sorts of http xyz testing.
(After seeing the IntelliJ one from a colleague, I searched for something like it in Neovim; that's the best one I found. It's not perfect, but it works.)
Edit: The tool from OP looks very neat though. I will try it out. Might be a handy thing for a few prepared tests that I run frequently
It is targeted more toward the Postman crowd, though, so it may not be as lightweight.
Rest Client has a few cons though, like request chaining.
I was using Rest Client and was very happy with it, but at some point I needed it to respect my computer's NO_PROXY env variable to avoid using the proxy for a certain URL, and found that wasn't possible with Rest Client. That's the only reason I had to look for an alternative tool. After some analysis, I liked Bruno and Hurl. I haven't tried Hurl yet.
There is probably something to be said for keeping a hard boundary between the backend and testing code, but this would require more effort to create and maintain. I would still need to run the native test suite, so reaching out to an external tool feels a little weird. Unless it was just to ensure an API was fully generic enough for people to run their own clients against it.
I don't use hurl but I've used other tools to write language agnostic API tests (and I'm currently working on a new one) so here's what I like about these kinds of tests:
- they're blind to the implementation, and that's actually a pro in my opinion. It makes sure you don't rely on internals, you just get the input and the output
- they can easily serve as documentation because they're language agnostic and relatively easy to share. They're great for sharing between teams in addition to or instead of an OpenAPI spec
- they actually test a contract, and can be reused in case of a migration. I've worked on a huge migration of a public API from Perl to Go and we wanted to keep relatively the same contracts (since the API was public). So we wrote tests for the existing Perl API as a non-regression harness, and could keep the exact same tests for the Go API since they were independent from the language. Keeping the same tests gave us greater confidence than if we had to rewrite tests and it was easy to add more during the double-run/A-B test period
- as a developer, writing these forces you to switch context and become a consumer of the API you just wrote, I've found it easier to write good quality tests with this method
Another benefit: we built a Docker image for production and wanted something light and not tied to the implementation for integration tests.
For my team's needs, I see the benefit of a self-contained tool that doesn't require extra modules to be installed or a venv-like environment activated (a real barrier when making sure others can use it too). Not to mention it runs fast.
Testing headers is particularly nice, so you can test the configuration of web servers and LBs/CDNs.
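A minimal sketch of that kind of header check in Hurl (the URL and header values here are made up):

GET https://example.org/

HTTP 200
[Asserts]
header "Cache-Control" contains "max-age"
header "Strict-Transport-Security" exists
header "X-Frame-Options" == "DENY"

Since the asserts run against response headers rather than the body, the same file works whether the values are set by the app, the web server, or a CDN in front of it.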
One annoying thing I've found in testing these tools is that a standard hasn't emerged for using the results of one request as input for another in the syntax of `.http` files. These three tools for instance have three different ways of doing it:
* hurl uses `[Captures]`
* Vscode-restclient does it by referencing request names in a variable declaration (like: `@token = {{loginAPI.response.body.token}}`).
* While httpyac uses `@ref` syntax.
From a quick round of testing it seems like using the syntax for one might break the other tools.
[1]: https://hurl.dev/docs/capturing-response.html
[2]: https://github.com/Huachao/vscode-restclient
[3]: https://httpyac.github.io/guide/metaData.html#ref-and-forcer...
I don't know what the mechanism/incentive for getting a standard would be either. Most likely it would happen if there were one clear "winner" that everyone else felt the need to mirror.
In any case, appreciate the reply and the tool. Good luck with it.
Conway's Law in action, ladies and gentlemen.
It gives you full control of constructing requests and assertions because test scenarios may include arbitrary JavaScript.
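A sketch of what that looks like in the JetBrains HTTP client (the endpoint is made up):

GET https://example.org/api/status

> {%
    client.test("returns 200", function() {
        client.assert(response.status === 200, "unexpected status");
    });
%}

The `> {% ... %}` block is a response handler: arbitrary JavaScript that runs after the request, with `client` and `response` objects available for assertions.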
https://blog.jetbrains.com/idea/2022/12/http-client-cli-run-...
Kinda niche, but I wrapped libhurl to make it easy to build an AWS Lambda availability monitor out of a Hurl file: https://gitlab.com/manithree/hurl_lambda
https://github.com/Orange-OpenSource/hurl/blob/master/Cargo....
TIL! The way I knew to do it was to have a mock implementation that behaved like the real thing, except for dates/times/UUIDs/..., where there was just a placeholder. Snapshot tests being able to "mask" those non-deterministic parts sounds cool!
GET http://foo.com
HTTP 200

You could also write:

GET http://foo.com
HTTP *
[Asserts]
status == 200

HTTP serves as a marker of the response section.

It would be nice to have fancy-regex; today I tried to write a regex to match a case like this ~ <link href="/assets/reset.css\\?hash=(.*)" integrity="\\1" rel="stylesheet"> ~ but the regex crate (and thus Hurl asserts) can't do backreferences, so I guess I'll just live without checking that those two substrings match.
I wish there was some way to test streamed updates / SSE. Basically, open a connection and wait, then run some other http requests, then assert the accumulated stream from the original connection. https://github.com/Orange-OpenSource/hurl/discussions/2636
The deficiencies in Hurl's client state management are not easy to fix.
What I'd like is full client state control with better variable management and use.
For my last project I used Python to write the tests, which appeared to work well initially. Dunno how well it will hold up under ongoing maintenance.
We used it very often a couple of years ago. Will try hurl.
I don't really feel the need for a curl replacement. In the past I've used httpie, which is pretty slick, but I end up falling back to writing tests in Python using the requests library.
Maybe I'm not the target audience here, but I should still say something nice I guess. It's nice that it's written in Rust, and open source tooling is in need of fresh projects ever since everyone started bunkering up against the AI monolith scraping all their work. We should celebrate this kind of project, I just wish I had a use for it.
Regarding curl, Hurl is just adding some syntax to pass data from request to request and add asserts to responses. For a one-time send-and-forget request, curl is the way, but if you have a workflow of some kind (like accessing an authenticated resource), Hurl is worth a try. Hurl uses libcurl under the hood, and there's a `--curl` option to get a list of equivalent curl commands.
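A minimal sketch of such a workflow (the URLs, credentials and JSON field names are hypothetical):

# Log in and capture a token from the JSON response
POST https://example.org/login
[Form]
user: bob
password: secret

HTTP 200
[Captures]
token: jsonpath "$.token"

# Reuse the captured token on a protected resource
GET https://example.org/protected
Authorization: Bearer {{token}}

HTTP 200

This is the part that's awkward with plain curl: the capture-and-reuse step would otherwise be shell plumbing around two separate invocations.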
Is there a different POST request in the readme or are you saying that this example is going to send the "user" and "password" params in the request body?
> POST https://example.org/login?user=toto&password=1234
That seems really surprising to me - how would you then send a POST request that includes query string parameters? The documentation on form parameters [1] suggests there's an explicit syntax for sending form-encoded request parameters
POST https://acmecorp.net/login?user=toto&password=1234

in the README is doing a POST request with the user and password parameters in the URL.

POST https://acmecorp.net/login
[Form]
user: toto
password: 1234

is a more traditional POST with user and password in the body. Probably going to update the README's sample. Issue created here [1]!

What about test isolation? Are people using something else to "prime" the service before/after running these tests?
You make a GET request to the server with any of the supported crawlers and obtain the result as JSON.
https://github.com/rumca-js/crawler-buddy/
Supports requests, Selenium, HTTPX, curl_cffi, etc.
I don’t think the DSL is significantly easier than a PL, and it’s more limited too?
Is it because of raw speed or ease of reading the DSL?
Sounds a lot like Emacs' restclient-mode, and I can absolutely see the appeal for those who don't already have an Emacs session open.
What's great is that all my experimentation is "reproducible", and all of it is in my notes. Notes that are easily searchable, exportable (e.g. for a blog content), etc.
I can easily add links to some relevant PDFs, and youtube vids right there. I can start annotating those pdfs, where my notes will be interwoven with my API research. I can watch videos while controlling playback from Emacs, without having to switch to the app - it's very nice when taking notes. I can retrieve the transcript, send it to LLM, get the summary of the video and add it to my notes.
If I come up with something that I'd like to memorize better, I can easily export chunks as Anki cards. That information is also within my notes. I can easily find it, edit it, etc. I don't need to navigate multiple different apps to get this work done.
I do use Org mode, but only for simple personal notes. When I get home I'll explore using it as a REST client. They always told me that Emacs is a great OS, that just lacks a decent text editor.
I love Emacs, but honestly Org-mode is such a treasure, a gem; my only regret is not discovering it sooner. It is a truly fantastic tool. I manage my entire life in it - I have my work notes and personal journal in it - I use Org-Roam. All my LLM chats are in Org-mode. My research and learning materials, my flashcards. I use it for pomodoro. I manage my dotfiles with Org-mode - it makes my entire system "immutable", I don't have to manipulate files individually - I do it from one place. Shit, I'm even reading your comment right now in Org-mode outline format¹. The value of plain text is absolutely underestimated. Once you see it, it's really difficult to give it up. And you'd ask yourself "give it up in exchange for what?" So you can keep searching for a "better and shinier" tool whenever you need to perform a fartworthy piece of task? A tool that has its own set of rules, and doesn't even let you rebind keys or change colors?
___
¹ https://www.reddit.com/r/emacs/comments/1hbi751/passing_data...
“Get data from the last log entry in <file> and post it to <url>”
[Captures]
csrf_token: xpath "normalize-space(//meta[@name='_csrf_token']/@content)"
Then use the name with mustaches: {{csrf_token}}
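Put together, the capture-and-reuse pattern looks roughly like this (the URLs are made up; the XPath capture is from the snippet above):

GET https://example.org/form

HTTP 200
[Captures]
csrf_token: xpath "normalize-space(//meta[@name='_csrf_token']/@content)"

POST https://example.org/submit
X-CSRF-TOKEN: {{csrf_token}}

HTTP 200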
- https://hurl.dev/docs/capturing-response.html

First time I've seen something cool come out of Orange.