So many of these features were adopted only after they were proven in other languages. You would expect that, since Java took such a slow and conservative approach, it would end up with extremely polished and elegant designs, but things like streams ended up inferior to prior art instead of being its culmination. Really disappointing. Java is now a Frankenstein's monster, with exactly as much beauty and charm.
Yes, it’s weird how that’s still Java, but using standard components and only using code as glue where it’s absolutely necessary seems very similar to other engineering disciplines to me.
You can always inject your own implementation if needed, right?
If you prefer GUI, Intellij even has a Spring Debugger: https://www.jetbrains.com/help/idea/spring-debugger.html
I much prefer Spring's XML configuration from the old days. Yeah, XML sucks and all that. But still, with XML, the configuration is completely external from the application and I can manage it from /etc style layouts. Hard coding and compiling in dependency injection via annotations or other such behaviors into the class directly has caused me grief over the long term pretty much every time.
How about varying implementations of a service interface? Let's say I have a Scheduler interface and I want to have multiple implementations; maybe one is CronScheduler, another is RandomScheduler, another is BlueMoonScheduler. Each of these schedulers has its own properties and configuration values. I might want to choose, per environment or deployment context, which service implementation to use.
Annotation configuration makes it (near?) impossible to dynamically wire and configure these scenarios, tailored to the environment or deployment context. Annotations are generally "static" and do not follow a configuration-as-code approach to application deployment.
An external configuration file, as offered by Spring's original (less "favored") XML style, allowed a more composable and sophisticated configuration-as-code approach. Maybe that's a negative, putting that much power into the hands of the service administrator. But as I stated originally, in my experience, having statically defined annotation driven configuration and/or dependency injection has caused more problems than it has solved.
Off the top of my head, you could drop a `@Profile` or `@ConditionalOnProperty` annotation on each of your different schedulers and pick them at launch time simply by adding the profile you want to the arguments or the environment. That assumes you want to choose one for the whole app. If you want to have different ones in different locations, you can dynamically load beans in code. Or if you want them loaded entirely with annotations, you could define differently named beans for each context, and include `@Qualifier`s on them and in the locations they're being used.
Which isn't to say that annotations are perfect, but dynamic runtime configuration is sort of core to how spring operates, and annotations are no exception to that.
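To make that concrete without any framework magic: the profile idea boils down to mapping a configuration value (from a property or environment variable) to an implementation. A plain-Java sketch, reusing the `Scheduler` names from the example upthread; the factory and the config key are made up:

```java
import java.util.Map;
import java.util.function.Supplier;

interface Scheduler { String name(); }

class CronScheduler implements Scheduler {
    public String name() { return "cron"; }
}

class BlueMoonScheduler implements Scheduler {
    public String name() { return "bluemoon"; }
}

class SchedulerFactory {
    // Maps a configuration value (e.g. -Dscheduler.impl=cron or an
    // environment variable) to an implementation, which is roughly
    // what @Profile / @ConditionalOnProperty do declaratively.
    private static final Map<String, Supplier<Scheduler>> IMPLS = Map.of(
        "cron", CronScheduler::new,
        "bluemoon", BlueMoonScheduler::new
    );

    static Scheduler fromConfig(String key) {
        Supplier<Scheduler> impl = IMPLS.get(key);
        if (impl == null) {
            throw new IllegalArgumentException("unknown scheduler: " + key);
        }
        return impl.get();
    }
}
```

Spring's annotations do essentially this selection for you at context startup, plus lifecycle management on top.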
All of your scenarios are trivial to implement with annotations.
And then if you want to change a value at runtime you have to restart the executable?
Would be nicer if we could handle creds like it wasn't 1992, but this does the job too.
Never really came across any other real cases where it solves a pressing issue, as you mention. Most times it's far more convenient to do things outside the compiled code.
It's kind of like when annotations arrived in Java: lots of projects thought they were the next greatest thing. I followed right along at the time, as well.
But over time, finding the value in the configuration-as-code approach to application deployment, I definitely feel that annotations have been abused for many use cases.
cronService.schedule("xxx", this::refresh);
This isn't any harder than an annotation. But you can ctrl+click through to the schedule implementation easily, put a breakpoint in it, and so on.
And what exactly is "cronService"? Do you write it in each service, or copy/paste it each time you need it?
My goodness. What a question!
The problem isn't that I don't know how to use a batteries included framework. The problem is that you guys don't know there is even an option to reuse your code by writing libraries.
Please do not project things like "you guys don't even know...". I'm one of "you guys", and have built production code in a variety of languages and frameworks. So this "you guys" knows exactly what he/she is talking about.
I am not. I am literally saying the exact opposite.
I don't even understand what the source of confusion is. I literally said exactly the same thing in the comment you first replied to.
If this is the level of incompetence encouraged by a framework, I would avoid using it just to avoid the chance of hiring people like you.
Just kidding. Spring boot is great. But yeah, I would fire people with this attitude without blinking an eye.
How do you decide whether you will write your own or pull in a dependency? This is a legit question. You did start this with writing your own "cronService" (which is about as insane as writing your own database driver), so I asked about it.
I really did not. I only said that if you were to create your own cronService, you can reuse it by making it a library rather than copy-pasting code (which is obviously insane).
> which is about as insane as writing your own database driver
No, it is not. Spring Boot’s support for async jobs and scheduled jobs is lacking. A lot of people roll their own. Including yours truly.
It is also much easier than writing a database driver, so there is that.
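For a sense of scale: a hand-rolled scheduler in the style of the `cronService.schedule(...)` call upthread can start at a couple dozen lines on top of ScheduledExecutorService. A sketch with assumed names; real cron-expression parsing is deliberately stubbed out:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hand-rolled scheduling service in the style discussed upthread.
// The cron expression is accepted but not parsed here; a real version
// would compute the next fire time from it. A short fixed period stands in.
class CronService {
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    void schedule(String cronExpression, Runnable task) {
        // Stub: fire every 5 ms instead of interpreting cronExpression.
        executor.scheduleAtFixedRate(task, 0, 5, TimeUnit.MILLISECONDS);
    }

    void shutdown() {
        executor.shutdownNow();
    }
}
```

A real version needs next-fire-time computation and error handling around the task, but it is nowhere near database-driver territory.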
Can you elaborate? What exactly is lacking and what version of Spring are you using?!
I am on the latest version of Spring Boot.
I'd give annotations 9/10 at least.
(And I lost interest in the rest of the article, given such a level of familiarity with the subject matter.)
I understand the benefits of dependency injection, but to be totally honest I'm more likely to take the Go-style approach of wiring it all up manually, even if it's a bit of extra boilerplate. The indirection and abstractions built up in DI frameworks is rarely worth it IMO.
Harder than Spring, but less magic than Spring.
It's perfectly fine to never have touched Spring. What surprised me is not acknowledging that annotations are used not only for clerical things like @Override or @Deprecated, and not only for some weird wiring with @Inject or @RequestBody, but also allow adding large custom steps in transforming the code. Annotation processors are a huge comptime interface that can do, and routinely does, wild things unimaginable in Go: the kind of code transformations you would expect in Lisp or Python.
I suspect the latter should have interesting security implications, too.
Anyway, I just wanted to say that it's totally pointless to state something like "I know it well". Say what your problem with it is; "I don't like it" doesn't add anything to the conversation. I'm quite sure that whatever problems you named, most people would agree with, and maybe there would even be tips under your comment on how to prevent them. That will never happen with the kind of comments you made above.
All the big magic annotations are for Enterprise.
Okay, I've occasionally done a couple of Spring Boot REST services, which were ... fine ... as long as you didn't have to do anything even remotely complicated, but it keeps you in this weird box of middling performance.
If you've ever been on any large Enterprise spring Java project, you know what the future of vibe coded enterprise is bringing.
For example, they used checked exceptions. Those definitely do not seem like a proven feature. C++ has unchecked exceptions. Almost every other popular language has unchecked exceptions. Java went with checked exceptions, and nowadays they are almost universally ignored by developers. I'd say that's a total failure.
Streams are another good example. Making a functional API for collections is pretty trivial. But they decided to design streams around some kind of very easy parallelisation. This led to an extremely complicated implementation, absurdly complicated. And I've yet to encounter a single use-case for this feature. So for a very rarely needed feature they complicated the design immensely.
Modules... LoL.
We will see how green threads work out. Most languages adopt the much simpler async/await approach. Very few languages implement green threads.
Those are from java 1.0 and thus don't appear to be relevant to the part of the discussion I think this part of the thread is about (namely: "Why doesn't java crib well designed features from other languages?").
> Java went with checked exceptions and nowadays they are almost universally ignored by developers.
They aren't.
Note that other languages invented, for example, 'Either', which is a different take on the same principle: explicit mention of all somewhat-expectable alternative exit conditions, plus forcing callers to deal with them, though also offering a relatively easy way to just throw that responsibility up the call chain.
The general tenet (let's lift plausible alternate exit conditions into the type system) is being adopted left and right.
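A minimal sketch of that 'Either' shape in Java (hypothetical names; libraries such as Vavr ship a complete version). The caller must say what to do with both the error and the success case:

```java
import java.util.function.Function;

// Left carries an error, Right carries a success value. fold() forces
// callers to handle both alternatives, the same obligation checked
// exceptions impose, but expressed in the type system.
interface Either<L, R> {
    <T> T fold(Function<L, T> onLeft, Function<R, T> onRight);

    record Left<L, R>(L value) implements Either<L, R> {
        public <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
            return onLeft.apply(value);
        }
    }

    record Right<L, R>(R value) implements Either<L, R> {
        public <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
            return onRight.apply(value);
        }
    }

    static <L, R> Either<L, R> left(L v)  { return new Left<>(v); }
    static <L, R> Either<L, R> right(R v) { return new Right<>(v); }
}

class Parse {
    // An anticipated failure becomes a value instead of a thrown exception.
    static Either<String, Integer> tryParse(String s) {
        try {
            return Either.right(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Either.left("not a number: " + s);
        }
    }
}
```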
The problem with Java is that they haven't added the syntax to make dealing with those errors easy. It's boilerplate hell.
That’s not the only issue, though: Java also shunts both checked and unchecked exceptions through the same mechanism, conflating them. It’s no wonder that Java’s problematic implementation of checked exceptions has poisoned people against the concept.
Java could do something similar but they have enough promise types already.
Checked exceptions are an awesome feature that more languages should have. Just like static typing is a good thing because it prevents errors, checked exceptions are a good thing because they prevent errors.
Java's implementation of checked exceptions has some issues, though.
* "Invisible control flow", where you can't tell from the call site whether or not a call might throw (you need to check the signature, which is off in some other file, or perhaps visible in an IDE if you hover).
* Java has both checked and unchecked exceptions, but they go through the same try-catch mechanism, failing to make a clean distinction between recoverable errors and unrecoverable bugs. (In e.g. Rust and Go, recoverable errors go through return values but unrecoverable errors go through panics.)
In the end, Java's exception design requires a lot of effort to comply with, yet makes it difficult to understand when you've successfully locked things down.
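The second point is easy to demonstrate: a single catch clause swallows a checked IOException and an unchecked NullPointerException identically. A contrived sketch:

```java
import java.io.IOException;

class CatchDemo {
    // Returns the simple name of whatever was caught. A blanket
    // `catch (Exception e)` makes no distinction between the checked
    // IOException (an anticipated, recoverable error) and the unchecked
    // NullPointerException (a bug).
    static String run(boolean bug) {
        try {
            if (bug) {
                String s = null;
                s.length();            // unchecked: NullPointerException
            }
            throw new IOException();   // checked: must be caught or declared
        } catch (Exception e) {
            return e.getClass().getSimpleName();
        }
    }
}
```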
> failing to make a clean distinction between recoverable errors and unrecoverable bugs
Recoverability is context specific. One person's panic may just be another person's error case. I think this is one thing that programmers miss when talking about this topic. It is really up to the caller of your function whether something should panic. You can't make that decision for them.
> One person's panic may just be another person's error case.
We can make a strong distinction between recoverable errors which the programmer anticipated (e.g. this I/O operation may fail) versus unrecoverable errors resulting from unanticipated bugs which may leave the process in an unsound state, such as divide-by-zero or out-of-bounds array access[1].
There are some problem domains where even unrecoverable errors are not allowable, and programmers in those domains have to grapple with the panic mechanism.
But for the rest of us, it is useful to be able to distinguish between recoverable and unrecoverable errors — and to know how we have handled all possible sites which could result in recoverable errors.
[1] Joe Duffy explains it well: https://joeduffyblog.com/2016/02/07/the-error-model/#bugs-ar...
Never found this to be a problem. It is really common to all implementations of exceptions, not just checked ones. And when you write code, the compiler will yell at you, much as it does in monadic code.
People are used to that, and one common strategy is to not worry too much about handling individual exceptions but to instead wrap a big `try` block around everything near the outer boundary of your code. It’s good enough for many purposes and yields a high initial development velocity, but is comparatively fragile.
With languages like Rust, Go, and Swift, only unrecoverable errors trigger the panic mechanism. Every call site where a recoverable error may occur is identifiable: in Rust via Result, `unwrap()`, the `?` operator, etc.; in Go via returned errors (though unlike Rust you can discard them silently); and in Swift via the `try` operator.
You can still develop quickly by just unwrapping every Result, but unlike languages with invisible control flow, you can easily audit the codebase and go back to harden every site where a recoverable error may occur — yielding a level of robustness which is difficult to achieve in languages with unchecked exceptions.
A lot of code that throws checked exceptions is simply dangerous to use with Java streams because the execution order of stream operation is not obvious and possibly non-deterministic. For this reason, streams were never intended to also handle errors. Reactive frameworks are much better at that.
The UncheckedIOException is for situations where you really cannot throw a checked exception, such as inside an iterator. Which might lead to ugly surprises for the API user.
From the type system PoV, they could have just written something like `interface Runnable<X> { void run() throws X; }` and now `forEach` would have been written like `<X> void forEach(Runnable<X> r) throws X`. And repeat that for all stream operations, promoting `X` everywhere, so if your mapper function throws SQLException, the whole pipeline will throw it.
It even works today with some limitations (there's no way for `X` to take on a union type like `SQLException|IOException`), but with some generics improvements it could work.
But they decided not to touch this issue at all. So now people fool the compiler with tricks to throw checked exceptions as if they were unchecked (or just wrap them in UncheckedXxxException everywhere; I prefer that approach).
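The generic-throws idea really does work for the single-exception case; a sketch with made-up names (`ThrowingConsumer` is not a JDK interface):

```java
// A functional interface whose checked exception type is a type parameter.
interface ThrowingConsumer<T, X extends Exception> {
    void accept(T t) throws X;
}

class Pipelines {
    // The body's checked exception X surfaces in forEach's own signature,
    // so no wrapping in UncheckedXxxException is needed. What Java's
    // generics cannot express is a union bound like SQLException|IOException.
    static <T, X extends Exception> void forEach(Iterable<T> items,
                                                 ThrowingConsumer<T, X> body) throws X {
        for (T item : items) {
            body.accept(item);
        }
    }
}
```

A lambda that throws IOException makes the whole call declare `throws IOException`; one that throws nothing checked costs the caller nothing.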
E.g., mutable state capture in lambdas is largely restricted because of the thought that people would use parallel threads within the stream processing blocks. That decision led to lots of ugly code, IMO.
I've also never seen a need to try to parallelize a simple stream processing step.
Modules absolutely achieved their primary goal: stopping libraries from accessing JDK internals without the application's knowledge. The ecosystem is slow on the uptake since split packages and access to internal APIs is endemic, but it is happening ever so slowly. I wish libraries could opt into not being part of the unnamed module.
Virtual threads were designed with explicit cooperation of the community, with the explicit goal of making it easy to switch as much existing code over to it as possible. I really don't understand the scepticism there. Most other languages went with promises or reactive streams because they were inspired by how functional programming languages do it.
In the same way, a caller can know which exceptions are really likely to happen in TypeScript, C++, Python, or in modern Java which avoids checked exceptions anyway: by reading documentation or source code. That's perfectly fine and works for everyone.
And you highlighted one big issue with checked exceptions. You've claimed that those exceptions are "really likely to happen".
When I'm reading data from a resource stream, data that's located next to my class files, IOExceptions are really unlikely to happen.
Another ridiculous example of this checked exception madness:
    var inputStream = new ByteArrayInputStream(bytes);
    var outputStream = new ByteArrayOutputStream();
    inputStream.transferTo(outputStream); // throws IOException? wtf???
This code can throw OutOfMemoryError or StackOverflowError, but never IOException. Yet you're forced to handle an IOException which doesn't happen. And that's the issue with checked exceptions. There's no correspondence between checked exceptions and the likelihood of their occurrence. NullPointerException probably happens more than any checked exception. The division between checked and unchecked exceptions is absolutely arbitrary and makes sense only at the call site, never in the called function's signature.
> Modules absolutely achieved their primary goal: stopping libraries from accessing JDK internals without the application's knowledge. The ecosystem is slow on the uptake since split packages and access to internal APIs is endemic, but it is happening ever so slowly. I wish libraries could opt into not being part of the unnamed module.
"Slow" is an understatement. I don't see this happening at all. Last time I tried to write a very simple application with modules, I spent so many hours banging my head against various walls that I probably will not make another attempt in the foreseeable future.
https://www.reddit.com/r/java/comments/1o37hlj/reopening_the...
This provides no automatic verification that all likely error situations that can and should be handled were indeed handled. The very idea is that you have to explicitly opt out of handling a checked exception. Result types don't carry a stack trace; apart from that, I'm not convinced that they are inherently better. In fact, I'd argue that handling a Result and an exception looks much the same in imperative code.
> When I'm writing reading data from resource stream, the data that's located next to my class files, IO Exceptions are really unlikely to happen.
Java class loaders can do anything including loading resources from the network. Which is admittedly not that common these days after the demise of applets.
> ByteArrayInputStream -> ByteArrayOutputStream
The general assumption behind IO interfaces is that the operations might fail. These two classes are oddballs in that sense. Note that the other write methods in `ByteArrayOutputStream` don't declare checked exceptions.
Since the compiler cannot prove that an exception will never be thrown (essentially due to Rice's theorem) there are always going to be false positives. The issues with checked exceptions therefore boil down to API design and API abuse.
Re Errors: the programmer cannot do anything about them and might make matters worse by trying. Preventing an OutOfMemoryError relies on whole-system design so that peak memory consumption is kept under control. Likewise, a StackOverflowError can in no way be prevented or handled by the caller. Therefore both of them are `Error`s, not `Exception`s.
> NullPointerException probably happens more than any checked exception.
Patently untrue, as network connections break down and files cannot be accessed all the time.
The NullPointerException indicates a bug in the application. By the very fact that it occurs, the current thread cannot continue execution normally. After a checked exception, it very much might. Though I would very much like not to have to handle exceptions in static initializer blocks; there is no good way to react to any problem happening there.
> "Slow" is an understatement. I don't see this happening at all.
All of this is slow-moving, I completely agree, but due to backwards compatibility concerns the ecosystem cannot be weaned off the issues that the JPMS is designed to prevent in a short time.
And some people write code in Python which provides no automatic verification whatsoever.
Actually unchecked exceptions are very similar to dynamically typed languages. And that's fine. As Python and other languages proved by their mere existence: dynamic typing is not inherently bad. Sometimes it's good. I think that for error handling, it's good.
> Java class loaders can do anything including loading resources from the network. Which is admittedly not that common these days after the demise of applets.
Technically they can, but in my particular case I know very well that my resources reside inside a JAR file. And if that JAR file happened to sit on a failed HDD block, that's not going to be a recoverable error.
When we're talking about IOExceptions, it's almost always a failed operation which requires a complete abort. It's either failed hardware or, typically, a disconnected client. Can't do much about it, other than clean up and proceed to the next client.
And the same could be said about SQLExceptions. Like 99% of SQL exceptions are application bugs which are not going to be handled in any way other than unwinding and returning a high-level HTTP 500 error or something like that. There are cases when an SQL exception should be handled, but those are rare. Yet the JDBC developers decided that programmers must perform error handling rituals at every call site.
> The NullPointerException indicates a bug in the application. By the very reason it occurs, the current thread cannot continue execution normally.
That's not exactly true. In the JVM, NullPointerException is absolutely well defined and you can continue execution after catching it. You might suspect that logical application state is corrupted, but sometimes you know well that everything's fine (and in most cases you hope that everything's fine; if your Spring MVC handler threw an NPE, Spring is not going to crash, it'll continue to serve requests). It's not C++ with its undefined behavior. The JVM is pretty reliable when it comes to every error, including NPE, stack overflow, or OOM. The latter is special, because even handling the error might prove challenging when memory allocations fail, but the JVM will not hang or crash.
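That behavior is easy to check: catching an NPE and carrying on is well defined on the JVM, which is exactly what per-request frameworks rely on. A contrived sketch:

```java
class NpeDemo {
    // Catching NullPointerException is legal and leaves the JVM in a
    // well-defined state; whether the application state is still sound
    // is a separate question.
    static int lengthOrDefault(String s) {
        try {
            return s.length();
        } catch (NullPointerException e) {
            return -1;
        }
    }
}
```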
Python is a language with almost no static validation whatsoever. It would be very odd if it cared about checked exceptions. This dynamism makes big Python code bases infuriating to work with.
> When we're talking about IO Exceptions, it's almost always failed operation which requires complete abort. It's either failed hardware, or, typically, disconnected client. Can't do much about it, other than clean up and proceed to the next client.
If this is the case then the solution is to add it to the `throws` list.
> That's not exactly true. In JVM, NullPointerException is absolutely well defined and you can continue execution after catching it.
Why would I catch a NullPointerException instead of fixing my application? The JVM is indeed completely fine, but processing still cannot continue because that code simply does not exist.
I think streams are a great example of what I was saying about Java failing to take advantage of coming last. Scala (probably among others, but Scala was right there on the JVM) had already demonstrated that it was possible to enable simple, readable code for simple use cases, while also enabling complex and powerful usage. And the experience of Scala had shown that there's little demand for parallel collections outside of extremely niche use cases where people tend to use specialized solutions anyway. Somehow Java, with this example staring them in the face, managed to get the worst of both worlds.
I always believed it was a major plus point for Java compared to other languages. I am even surprised to hear otherwise. How should parallel processing of streams work, in your opinion, then? Just saying it should be unsupported would be laughable considering today's hardware.
I would rate this feature 9/10. The fact that the author has rated it 1/10 shows he hasn't really worked on large-scale parallel processing of data, in Java anyway.
There might be use-cases, but I've yet to encounter them.
And when I need parallel computation, I can just use the good old ExecutorService. A few more lines, but that's OK for a task that arises once in 10 years.
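For reference, the ExecutorService route really is only a few extra lines; a sketch of a parallel map without parallel streams (pool size picked arbitrarily):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelMap {
    // Squares each input on a fixed thread pool: the "few more lines"
    // alternative to inputs.parallelStream().map(i -> i * i).toList().
    static List<Integer> squares(List<Integer> inputs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Integer i : inputs) {
                futures.add(pool.submit(() -> i * i));
            }
            List<Integer> result = new ArrayList<>();
            for (Future<Integer> f : futures) {
                result.add(f.get()); // preserves input order
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }
}
```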
It might only be one language, but it’s a pretty big one
Instead of a config class and a bunch of apps with @Inject Config config;, we'd have giant *Config classes. Each one would have lots of methods like:
    @Bean public Foo fooProducer(FooConfig config, BazProvider provider, BarProvider barProvider, SoapClient soapClient) {...}
Want to know how they were produced? Find usages on the class' constructor.
The magic @Inject and @Autowired annotations don't seem worse than that to me.
Mandatory personal anecdote:
I'm not a Java guy, but I've been around Java since '99, and a few years ago I was moved to a strictly Java team. Upon introduction I decided to show them my party trick and implemented a pseudo "wolf3d" live in one day. As usual, the Java devs were sort of impressed by the fact that you can do graphics and user input in Java, because nowadays that's extremely rare for most of them. I got my approval and in return I asked them to give me a quick one-day deep dive into Spring.
At the end I was presented with a basic hello world project that consisted mostly of... EMPTY CLASSES. I mean literally, `class Foo {}` END OF FILE!
Of course these empty classes had at least 5 lines of annotations on top of class declaration and in the end it somehow pushed the framework into the right direction, but oh my fucking god, I was throwing up in my mouth for the rest of the week.
The true explanation, at least the way OpenJDK says it, is that designing language features is more complex than a casual glancer can fathom, and there's 30 years of "Java is in the top 5 most used languages on the planet, probably #1 especially if focussing on stuff that was meant to be supported for a long time" to think about.
From personal experience, essentially every single last "Just do X to add (some lang feature) to java; languages A and B do it and it works great!" would have been bad for java. Usually because it would cause a 'cultural split' - where you can tell some highly used library in the ecosystem was clearly designed before the feature's introduction.
Even if you introduce a new feature in a way that doesn't break existing code, it's still going to cause maintainability headaches if you've cornered the pillars of the ecosystem into total rewrites if they want to remain up to date with the language. Because they will (or somebody will write an alternative that will) and you've _still_ 'python 2 v python 3'd the language and split the baby in half.
For what it's worth, I think the OpenJDK team doesn't take this seriously enough, and a number of recently introduced features have been deployed too hastily without thinking this through. For example, `LocalDate`, which has 'this should be a record' written all over it, is not a record. Or how the SecurityManager is being ditched without replacements for what it is most commonly used for here in the 2020s. (To be clear: ditching it is a good idea, but having no in-process replacement for "let me stop attempts to access files and shut down the JVM, not for security purposes but simply for 'plan B' style fallback purposes" is a bit regrettable.)
I'm nitpicking on those points because on the whole OpenJDK is doing a far better job than most languages on trying to keep its ecosystem and sizable existing codebase on board _without_ resorting to the crutch of: "Well, users of this language, get to work refactoring everything or embrace obsoletion".
Eventually, I guess there'll be backwards compatible "pattern extractors" functionality retrofittable to existing "record-like" classes. This has been hinted at on several occasions.
Yes, exactly - now you're getting it. Or rather I get the feeling I failed to explain it well.
ArrayList predates generics.
However, ArrayList does have generics.
That's because generics were added to the language in a 'culturally backwards compatible' way: Existing libraries (i.e. libraries that predate the introduction of generics, such as ArrayList) could modify themselves to add support for generics in a way that is backwards compatible for the library: Code written before they added it continues to work and compile fine even against the new release that added generics.
The same principle applied to records would mean that LocalDate could have been updated to turn into a record in a way that is backwards compatible.
And it really works that way... almost. You really can take an existing class (defined with `class`) and change it into a record (defined with `record`) and all existing code continues to work just fine. However, this means all your properties now necessarily get an accessor method named after the property. And that is a problem for LocalDate specifically: it already has accessors and they are named e.g. `getYear()`, not `year()`. That means if LocalDate were to be rewritten as a record, one of two very nasty options must be chosen:
* Break backwards compatibility: As part of upgrading code you must change all calls to `.getYear()` into calls to `.year()`. It's a total ecosystem split: Every dependency you use comes in 2 flavours, one with calls to getYear and one with year, and you must use the right ones. This is truly catastrophic.
* Have both methods. There's year() and also getYear() and they do the same thing. Which is the lesser evil by far, but it makes very clear that LocalDate predates `record`. Contrast with ArrayList: it is not at all obvious that ArrayList predates generics. It does, but if you were to design ArrayList from scratch after generics were introduced, you'd probably end up with the same code. Maybe the signature of `remove` would have been `remove(T)` instead of the current `remove(Object)`.
The obvious choice, then, is to not make it a record. And that's my point: the best possible way to deploy records (perfection may be the enemy of good here, but I'd have done it differently) would have included some way for LocalDate to turn into a record without the above dilemma.
Perhaps simply a way to explicitly write your accessors with some marker to indicate 'don't generate the default `year()`; THIS is the accessor for it'.
Had that feature been part of record, then LocalDate could have turned into one in a way that you can't really tell.
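The 'have both methods' option from the list above looks like this in a sketch (a made-up stand-in for LocalDate, not its real API):

```java
// The record generates year() automatically; a hand-written getYear()
// alias keeps pre-record call sites compiling. Harmless, but it
// advertises that the type predates records.
record YearMonthDay(int year, int month, int day) {
    public int getYear() { return year; }
}
```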
Date flat out doesn’t. We needed something in the standard library to fix that. It should’ve happened long before it did.
Can’t wait for destructuring support for classes, though.
Also there was a long period when changes were very lumpy - it could be multiple years for a feature to make it into the release, and anything that might screw up other features got a lot of pushback. Then other conventions/tools emerged that reduced the urgency (e.g. the Lombok stuff)
Edit: I should add that it's now on a fixed 6-monthly release cycle which IMO works much better.
For example, java is somewhat unique in having lambda syntax where the lambda *must* be compile-time interpretable as some sort of 'functional type' (a functional type being any interface that defines precisely 1 unimplemented method). The vast, vast majority of languages out there, including scala which runs on the JVM, instead create a type hierarchy that describe lambdas as functions, and may (in the case of scala for example) compile-time automatically 'box'/'cast' any expression of some functional type to a functional interface type that matches.
Java's approach is, in other words, unique (as far as I know).
There was an alternate proposal available at the time that would have done things more like other languages does them, completely worked out with proof of concept builds readily available (the 'BGGA proposal'). The JVM would autogenerate types such as `java.lang.function.Function2<A, B, R>` (representing a function that takes 2 arguments, first of type A second of type B, and returns a value of type R), would then treat e.g. the expression:
`(String a, List<Integer> b) -> 2.0;`
As a `Function2<String, List<Integer>, Double>`, and would also 'auto-box' this if needed, e.g. if passing that as the sole argument to a function:
    void foo(MyOperation o) {}
    interface MyOperation { Double whatever(String arg1, List<Integer> arg2); }
This proposal was seriously considered but rejected.
The core problem with your comment is this:
Define the terms "polished" and "elegant". It sounds so simple, but language features are trying to dance to quite a few extremely different tunes, and one person's 'elegance' is another person's 'frankensteinian monster'.
The same mostly goes for your terms "beauty" and "charm", but, if I may take a wild stab in the dark and assume that most folks have a very rough meeting of the minds as to whatever might be a "charming" language: I know of no mainstream long-term popular languages that qualify for those terms. And I think that's inherent. You can't be a mainstream language unless your language is extremely stable. When you're not just writing some cool new toy stuff in language X - you're writing production code that lots of euros and eyeballs are involved in, and there's real dependence on that software continuing to run, then you __must__ have stability or it becomes extremely pricey to actually maintain it.
With stability comes the handcuffs: You need to use the 'deprecation' hammer extremely sparingly, essentially never. And that has downstream effects: You can't really test new features either. So far I have not seen a language that truly flourishes on the crutches of some `from future import ...` system. That makes some sense: Either the entire ecosystem adopts the future feature and then breaking _that_ brings the same headaches, or folks don't use these features / only for toy stuff, and you don't get nearly the same amount of experience from its deployment.
Said differently: If java is a frankenstein, so is Javascript, C#, Python, Ruby, Scala, and so on. They have to be.
I'd love to see a language whose core design principles are 100% focused on preventing specifically that. Some sort of extreme take on versioning of a language itself that we haven't seen before. I don't really know what it looks like, but I can't recall any language that put in the kind of effort I'd want to see here. This is just a tiny sliver of what it'd take:
* The language itself is versioned, and all previous versions continue to be part of the lang spec and continue to be maintained by future compilers. At least for a long time, if not forever.
* ALL source files MUST start with an indication of which version of the language they use.
* The core libraries are also versioned, and separately. Newer versions are written against old language versions, or can be used by source on old language versions.
* The system's compilers and tools are fundamentally operating on a 'project' level granularity. You can't compile individual source files. Or if you can, it's because the spec explains how a temporary nameless project is implied by such an act.
* All versions ship with a migrator tool, which automatically 'updates' sources written for lang ver X to lang ver X+1, automatically applying anything that has a near-zero chance of causing issues, and guiding the programmer to explicitly fixing all deprecated usages of things where an automated update is not available.
* The language inherently supports 'facades'; a way for a library at version Y to expose the API it had at version X (X is older than Y), but using the data structures of Y, thus allowing interop between 2 codebases that both use this library, one at version X and one at version Y.
That language might manage the otherwise impossible job of being 'elegant', 'simple', 'mainstream', 'suitable for serious projects', and 'actually good'.
Clear code should endeavor to be readable/understandable when printed on a sheet of paper by anyone; acceptable code should be understandable by anyone who knows a bit about the technologies and has some IDE support.
Garbage code is what you have when the code in question is only understandable when you actually run it, as it uses arbitrary framework logic to wire things together based on metadata on the fly.
no single language is ideally suited for every situation, it's not inherently a sign of failure that someone makes a DSL.
and since annotations are part of the language, this is still all "the language is flexible enough to build the framework [despite being wildly different than normal code]" so I don't think it even supports that part.
I was just starting real programming. I knew naming was hard, so I was using a thesaurus almost as extensively as - if not more than - the reference manual.
But his work defined designing API for me for life. Stuff we take for granted, and we often overlook as seemingly trivial.
Let's say you have a collection type that has a method ``put``. It takes two arguments - an object you want to insert, and an index you want to put it at. Which argument should go first? Could the index be optional? What value should it default to? Does the function return anything? A boolean to indicate whether insertion was successful? Or the index at which the object was put? If the latter, how do you indicate an error?
All of these seem trivial, but he and his team worked on that library for over a year, and he thoroughly documented their work in a series of presentations.
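For what it's worth, `java.util.List` eventually answered each of those questions one way; a quick sketch of its choices (index first, mandatory, void return, unchecked exception on a bad index):

```java
import java.util.ArrayList;
import java.util.List;

public class ApiChoices {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("a", "c"));

        // java.util.List's answers: index comes first, it is not optional,
        // the method returns void, and an invalid index is signaled with
        // an unchecked IndexOutOfBoundsException.
        list.add(1, "b");
        System.out.println(list); // [a, b, c]

        // Contrast: Map.put made a different call and returns the previous
        // value (or null). Every one of these choices was deliberate.
    }
}
```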
And we can't forget about his Java Puzzlers, an absolute gem.
I highly recommend the Growing the Java Language talk by Brian Goetz to anyone who's interested in the philosophy behind evolving the modern Java language [1]. And don't be misled by the title; it's not just about Java, it's about software design.
As mentioned in TFA, "The general advice seems to be that modules are (should be) an internal detail of the JRE and best ignored in application code"
So yeah, why expose it to those who are not the "main customer"?
How did modules affect you as a user? I'd guess that you had to add `--add-opens`/`--add-exports` during one of the JDK migrations at some point. And the reason you had to do it was that various libraries on your classpath used JDK internal APIs. So modules provided encapsulation and gave you an escape hatch for when you still have to use those libraries. How else would you do it while still achieving the desired goal?
Ever wrote "List" in Intellij and instead of importing "java.util.List" Intellij prompts you to choose between like twenty different options from all the libraries you have included in your classpath that have implemented a public class named "List"? Most likely 90% of the libraries did not even want to expose their internal "List" class to the world like that but they got leaked into your classpath just because java didn't have a way to limit visibility of classes beyond packages.
The only way of doing this would be to put all classes in the same package. Any nontrivial library would have hundreds of classes. How is that a practical solution?
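A module descriptor is the mechanism that finally allows this. The sketch below is illustrative - the module and package names are invented, not from any comment above - but it shows the shape of the fix: only exported packages are visible to consumers, so an internal `List` class stops leaking into everyone's autocomplete.

```java
// module-info.java -- illustrative sketch; names are hypothetical.
module com.example.mylib {
    // Only this package is part of the public API.
    exports com.example.mylib.api;

    // com.example.mylib.internal is simply not exported, so its
    // public classes (including any internal "List") are invisible
    // outside the module.
}
```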
So for most people, the initial impression of modules is negative, and then they just decided to rule the feature out completely. This has created a sea of useless criticism, and any constructive criticism is hardly observed. Improvements to module configuration (combine it with the classpath), would go a long way towards making modules "just work" without the naysayers getting in the way.
Is it even theoretically possible for a project like this to not run into these kind of issues? Like literally the project's goal is to enable library authors to be more explicit about their public API. So breaking use cases that use unsupported backdoor APIs very much seems like a predictable and expected result?
AFAIK, there's still no replacement for sun.misc.Signal and sun.misc.SignalHandler, so I think "that was all fixed" is false.
Surely almost everyone who has worked in a large enough codebase and thought about large-scale modularity can see the use case for a unit of abstraction in java higher than a package?
Given how much of a coach and horses modules drove through backwards compatibility it also kind of gives the lie to the idea that that explains why so many other language features are so poorly designed.
Oracle hates that people build fat jars and refuses to address the huge benefit of single-file deployables.
(1) It was the first disruptive enterprise business model. They aimed to make everyone a Java programmer with free access (to reduce the cost of labor), but then charge for enterprise (and embedded and browser) VM's and containers. They did this to undercut the well-entrenched Microsoft and IBM. (IBM followed suit immediately by dumping their high-end IDE and supporting the free Eclipse. This destroyed competition from Borland and other IDE makers tying their own libraries and programming models.)
(2) As an interpreted language, Java became viable only with good JITs. Borland's was the first (in JDK 1.1.7), but soon Urs Holzle, a UCSB professor, created the HotSpot compiler that has seeded generations of performance gains. The VM and JIT made it possible to navigate the many generations of hardware, delivering orders-of-magnitude improvements and putting software in every product. Decoupling hardware and software reduced the vertical integration that was killing customers (which also adversely affected Sun Microsystems).
btw, Urs Holzle went on to become Google employee #8 and was responsible for Google using massively parallel off-the-shelf hardware in its data centers. He made Google dreams possible.
Does anyone remember the full-page ads in the WSJ for a programming language that no one quite yet knew what it really was? So my formative impressions of Java were emotional/irrational, reinforced by comments like:
“Of course Java will work, there’s not a damn new thing in it” — James Gosling, but I’ve always suspected this might be urban legend
“Java, all the elegance of C++ syntax with all the speed of Smalltalk” - Kent Beck or Jan Steinman
“20 years from now, we will still be talking about Java. Not because of its contributions to computer programming, but rather as a demonstration of how to market a language” — ??
I can code some in Java today (because, hey, GPT and friends!! :) ), but have elected to use Kotlin and have been moderately happy with that.
One thing that would be interesting about this list, is to break down the changes that changed/evolved the actual computation model that a programmer uses with it, vs syntactic sugar and library refinements. “Languages” with heavy footprints like this, are often just as much about their run time libraries and frameworks, as they are the actual methodology of how you compute results.
When IT asked us if our application worked on Mac, we shrugged and said "We don't have a Mac to test it. We've never run it on a Mac. We won't support it officially, so if there are Mac specific bugs you're on your own. But... it should work. Try it."
And it did work. All the Mac users had to do was click on our Webstart link just like the PC users. It installed itself properly and ran properly. Never had a single issue related to the OS. Before Java was introduced that was an unobtainable dream in a full-featured windowed application.
I love GPT. Such a marvellous tool. Before ChatGPT came along, I had no medical experience. Thanks to GPT and friends, I am now a doctor. I've opened a clinic of my own.
I really like the feature, and it's really one of the features I feel Java got right.
The syntax is very expressive, and they can easily be made to generate meaningful exceptions when they fail.
It's also neat that it gives the language a canonical way of adding invariant checks that can be removed in production but run in tests or during testing or debugging (with -da vs -ea).
You could achieve similar things with if statements, and likely get similar performance characteristics eventually out of C2, but this way it would be harder to distinguish business logic from invariant checking. You'd also likely end up with different authors implementing their own toggles for these pseudo-assertions.
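A minimal sketch of that split between business logic and invariant checking (the `withdraw` example is invented for illustration):

```java
public class InvariantDemo {
    static int withdraw(int balance, int amount) {
        // Business-logic check: always runs, fails loudly for callers.
        if (amount < 0) throw new IllegalArgumentException("negative amount");

        int newBalance = balance - amount;

        // Invariant check: only evaluated when assertions are enabled
        // (java -ea). With -da, the default, it costs nothing at runtime.
        assert newBalance <= balance : "balance must not grow on withdrawal";
        return newBalance;
    }

    public static void main(String[] args) {
        System.out.println(withdraw(100, 30)); // 70
    }
}
```

Running with `java -ea InvariantDemo` turns the invariant on; without the flag, C2 can drop it entirely.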
In C, asserts are used as sanity checks, and when one is violated, there's often reasonable suspicion that memory corruption has occurred, or that memory corruption will occur if the code proceeds in the current state. Aborting the process, leaving a core dump for analysis, and starting fresh is often the safest thing to do to avoid the unpredictable results of corrupted state, which can be insidiously subtle or insanely dramatic. In my experience writing server-side C++, we always ran it in production with asserts enabled, because code that continued to run after memory was corrupted led to the bugs that were the most destructive and the most mysterious.
Memory corruption is rare enough in Java that 99.9% of code completely ignores the possibility. Also, if you did suspect memory corruption in a Java program, an assert wouldn't help, because it would only throw a runtime exception that would probably get caught and logged somewhere, and the process would continue serving requests or whatever else it was doing.
I don't know why they're not more popular.
For some things like requiring arguments to be non-null static checks with annotations have superseded them (in a confusing way inevitably - I think there are three different common non-nullness annotations).
There's Guava and its Preconditions class[0] that is approximately as terse and I find to be more helpful than everything-is-an-AssertionError.
[0] https://guava.dev/releases/14.0/api/docs/com/google/common/b...
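For a self-contained illustration of the Preconditions style without pulling in Guava, here is a JDK-only approximation - `checkArgument` below mimics Guava's method of the same name, throwing `IllegalArgumentException` rather than `AssertionError`:

```java
import java.util.Objects;

public class PreconditionsStyle {
    // JDK-only stand-in for Guava's Preconditions.checkArgument:
    // an IllegalArgumentException carries a meaningful message,
    // unlike a bare everything-is-an-AssertionError.
    static void checkArgument(boolean expr, String msg) {
        if (!expr) throw new IllegalArgumentException(msg);
    }

    static String firstWord(String sentence) {
        Objects.requireNonNull(sentence, "sentence");            // NPE with message
        checkArgument(!sentence.isBlank(), "sentence is blank"); // IAE with message
        return sentence.trim().split("\\s+")[0];
    }

    public static void main(String[] args) {
        System.out.println(firstWord("hello world")); // hello
    }
}
```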
Overall I agree with you. They are significantly better, even if a little verbose, than not having them. Love cleaning up old loops with a tiny stream that expresses the same thing more compactly and readably.
He’s also right on the parallel benefits not really being a thing I’ve ever seen used.
> Java Time: Much better than what came before, but I have barely had to use much of this API at all, so I’m not in a position to really judge how good this is.
Again, it is hard to overstate just _how_ bad the previous version is.
Though honestly I still just use joda time.
The original Java Time classes were likely a last-minute addition to Java. They were obviously a direct copy of C language time.h. It feels as if the Java team had a conversation like this: "Darn, we ship Java 1.0 in a month but we forgot to include any time functions!" "Oh no! We must do something!" "I know, let's just port C time.h!"
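For anyone who never suffered the old API: `java.util.Date` had 0-based months and 1900-based years, straight out of `time.h`. A short sketch of the `java.time` replacement (dates here are just examples):

```java
import java.time.LocalDate;
import java.time.Month;
import java.time.temporal.ChronoUnit;

public class TimeDemo {
    public static void main(String[] args) {
        // java.time (Java 8+): immutable values, named months, real years --
        // no more new Date(114, 2, 18) meaning March 18, 2014.
        LocalDate release = LocalDate.of(2014, Month.MARCH, 18);
        LocalDate later = release.plusYears(1).plusDays(10);

        System.out.println(later); // 2015-03-28
        System.out.println(ChronoUnit.DAYS.between(release, release.plusWeeks(2))); // 14
    }
}
```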
When you didn’t have collections everything was a complete pain. But after they were added you still had to cast everything back to whatever type it was supposed to be when you got stuff out of collections. Which was also a huge pain.
I know all the arguments about how generics weren’t done “correctly“ in Java. I’ve run into the problems. But I’m so glad we have them.
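A before/after sketch of the casting pain being described:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // Pre-generics (Java 1.4 and earlier): everything comes out as Object.
        List raw = new ArrayList();
        raw.add("hello");
        String s1 = (String) raw.get(0); // cast required; a mistake only fails at runtime

        // With generics (Java 5+): the compiler enforces the element type.
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        String s2 = typed.get(0); // no cast needed

        System.out.println(s1.equals(s2)); // true
    }
}
```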
So many things, even if they came much later, are somehow much worse.
Integer a = null;
int b = 42;
if (a == b) {} // throws NullPointerException
Short w = 42;
Short x = 42;
out.println(w == x); // true
Short y = 1042;
Short z = 1042;
out.println(y == z); // false
It turns out, somewhere in the auth path, a dev had used `==` to verify a user's ID, which worked for Longs under (I believe) 128, so any users with an ID bigger than that were unable to log in due to the comparison failing.
For performance reasons, boxed Short objects are cached when they represent values in the range -128 to +127, so for 42 the pointers will point to the same cached object after 42 is autoboxed to a Short. Whereas 1042 is outside this caching range, and the autoboxing creates two distinct objects with different pointers.
It's very simple but (a) non-obvious if you don't know about it and (b) rather wordy when I spell it out like this :)
In general in Java you want obj.equals(other) when dealing with objects and == only with primitives, but autoboxing/unboxing can cause confusion about which one you are dealing with.
In other words, the surprise ought to be that w == x is true, not that y == z is false!
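The fix for code like the auth bug above is to compare values, not references:

```java
public class BoxedCompare {
    public static void main(String[] args) {
        Long a = 1042L;
        Long b = 1042L;

        // Outside the -128..127 cache, autoboxing yields distinct objects,
        // so == compares two different references.
        System.out.println(a == b);          // typically false

        // Either of these compares the values and is always correct:
        System.out.println(a.equals(b));                    // true
        System.out.println(a.longValue() == b.longValue()); // true
    }
}
```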
Are there linters for this sort of thing? I don't write Java much any more.
Yes and they're pretty good so it's rarely an issue in practice. Using == on object references will indeed usually get you yelled at by the linter.
I’ll be happy when it’s fixed.
I will acknowledge that the interface is a bit weird, but I feel like despite that it has consistently been a "Just Works" tool for me. I get decent performance, the API is well documented, and since so many of my coworkers have historically been bad at it and used regular Java IO, it has felt like a superpower for me since it makes it comparatively easy to write performant code.
Granted, I think a part of me is always comparing it to writing raw epoll stuff in C, so maybe it's just better in comparison :)
I really wish Javadoc was just plain text that honored line breaks. I really don’t care about the fact I can put HTML in there, that just seems dumb to me. I get you can’t remove it but I would be happy if you could.
I do like markdown. But I don’t see myself ever using it in a Javadoc.
Markdown in javadoc is at least 7/10 for me. Improves comment readability for humans while allowing formatted javadocs.
Servlets (Together with MS ASP, JSP/Servlets have fuelled the e-commerce websites)
I think Java dominated the scene mostly because of its enterprise features (Java EE) and the supporting frameworks (Spring etc) and applications (Tomcat, Websphere, Weblogic etc) and support from Open source (Apache, IBM)
It is amazing they haven’t made a special type for that. I get that they don’t want to make unsigned primitives, though I disagree, but they could at least make something that makes this stuff possible without causing headaches.
Having more type conversion headaches is a worse problem than having to use `& 0xff` masks when doing less-common, low-level operations.
The same way you pass a 64-bit integer to a function that expects a 32-bit integer: a conversion function that raises an error if it's out of range.
When trying to adapt a long to an int, the usual pattern is to overload the necessary methods to work with longs. Following the same pattern for uint/int conversions, the safe option is to work with longs, since it eliminates the possibility of having any conversion errors.
Now if we're talking about signed and unsigned 64-bit values, there's no 128-bit value to upgrade to. Personally, I've never had this issue, considering that 63 bits of integer precision is massive. Unsigned longs don't seem that critical.
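For completeness: since Java 8 the wrapper classes do ship static helpers that wrap the `& 0xff` idiom and handle the no-wider-type 64-bit case, without introducing new types:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        byte raw = (byte) 0xFF; // -1 as a signed byte

        // The classic mask, and the Java 8 helper that wraps it:
        System.out.println(raw & 0xFF);              // 255
        System.out.println(Byte.toUnsignedInt(raw)); // 255

        // Unsigned 64-bit operations without a wider type to upgrade to:
        long big = 0xFFFFFFFFFFFFFFFFL; // -1 signed, 2^64-1 unsigned
        System.out.println(Long.toUnsignedString(big));        // 18446744073709551615
        System.out.println(Long.compareUnsigned(big, 1L) > 0); // true
    }
}
```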
I think the only answer would be you can’t interact directly with signed stuff. “new uint(42)” or “ulong.valueOf(795364)” or “myUValue.tryToInt()” or something.
Of course if you’re gonna have that much friction it becomes questionable how useful the whole thing is.
It’s just my personal pain point. Like I said I haven’t had to do it much but when I have it’s about the most frustrating thing I’ve ever done in Java.
Fast forward a few years later, and I'm actually at a C# shop.
Fast forward a decade, I'm at the same shop. I adore C# and I fondly remember my foray into Java.
I left Java around the time Streams were becoming a thing. I thought it looked like a mess, and then I ran into LINQ in C# land. Swings (pun intended) and roundabouts.
But astonished that Optional isn't mentioned either there or in the comments. A second way to represent no-value, with unclear and holy-war-ushering guidance on when to use, and the not exactly terse syntax I see everywhere:
```
Optional<Ick> ickOpt = Optional.ofNullable(ickGiver.newIck());
ickOpt.flatMap(IckWtfer::wtf).ifPresentOrElse(
    (Wtf wtf) -> unreadable(wtf),
    () -> log.warn("in what universe is this clearer than a simple if == null statement?!"));
```
We use NullAway and I just never use Optional unless it really, really makes sense.
```
Ick1 result1 = potentiallyNull1();
Ick2 result2 = (result1 == null) ? null : potentiallyNull2(result1);
Ick3 result3 = (result2 == null) ? null : potentiallyNull3(result2);
```
vs
```
Ick3 result3 = potentiallyNone1()
    .flatMap(potentiallyNone2)
    .flatMap(potentiallyNone3);
```
You could maybe move the null check inside the method in the former, and it cleans it up a bit, but in the latter you can have methods that are explicitly marked as taking NonNull in their type signature, which is nice.

(Today, even though I still use C++ and C along with Java, I'll challenge anyone who claims that Java is slower than C++.)
But even for I/O bound applications it still feels slow because excessive memory usage means more swap thrashing (slowing down your entire OS), and startup time suffers greatly from having to fire up VM + loading classes and waiting for the JIT to warm up.
I can start a C/C++/Rust based web server in under a second. The corresponding server in Java takes 10 seconds, or minutes once I have added more features.
The release of HotSpot was in 1999, and became default with JDK 1.3 in 2000. It took JIT compilation to the next level, making tools like GCJ mostly obsolete.
```
BeanFactoryBuilder builder = new BeanFactoryBuilder(...);
```
This is just straight up a duplicate. With generics, generic parameters can be left out on one side, but the class itself is still duplicated.

Simply, I like (mind, I'm a 25-year Java guy, so this is all routine to me) to know the types of the variables, the types of what things are returning.
```
var x = func();
```
doesn't tell me anything.

And, yes, I appreciate all comments about verbosity and code clutter and FactoryProxyBuilderImpl, etc. But, for me, not having it there makes the code harder to follow. Makes an IDE more of a necessity.
Java code is already hard enough to follow when everything is a maze of empty interfaces, but "no code", that can only be tracked through in a debugger when everything is wired up.
Maybe if I used it more, I'd like it better, but so far, when coming back to code I've written, I like things being more explicit than not.
var myPotato = new PotatoBuilder().build();
not like var myFood = buyFood();
where buyFood has Potato as its return type.

2. Even if you don't follow 1, IDEs can show you the type like
var Potato (in different font/color) myFood = buyFood();
It does help when writing:
var x = new MyClass();
Because then you avoid repetition. Anyways, I don't ever use "var", to keep the code compatible with Java-8 style programming and easier on the eyes, for the same reasons you mention.

But I think that's easily solved by adding type annotations for the return type of methods - annotating almost anything else is mostly just clutter imo.
so fibonacci could look like this
```
let rec fib n =
match n with
| 0 -> 1
| 1 -> 1
| _ -> fib (n - 1) + fib (n - 2)
```
or with annotations it becomes this:
```
let rec fib (n: int): int =
(* Same as above :) *)
```
It exists. It’s fine. People obviously like it.
Some don’t, I’m one of them. I don’t see the advantage is very big at all. I don’t think it’s worth the trouble.
But that’s me.
Sometimes I doubt most Hacker News commenters have ever worked in big corpo environments where you have to regularly review large PRs in some broken webapp like GitHub.
Generally, you save some keystrokes to let other people (or future you) figure it out when reading. It seems like bad practice altogether for non trivial projects.
Those keystrokes are not just saved on writing, they make the whole code more legible and easier to mentally parse. When reading I don't care if the variable is a specific type, you're mostly looking whats being done to it, knowing the type becomes important later and, again, the IDE solves that for you.
The word "String" "Integer" et al. + "var" is too much real estate for being explicit. Sometimes, I'm looking at the decompiled source from some library that doesn't have a source package available.
> Those keystrokes are not just saved on writing, they make the whole code more legible and easier to mentally parse.
This is incorrect. Repeating it doesn't make it true. For trivial code (<10 lines) probably seems fine at the time. Lots of bad practices start with laziness.
Changing practice because an author thinks a function is small enough when it was written, is a recipe for unclean code with no clear guidelines on what to use or expect. Maybe they rather put the onus on a future reader; this is also bad practice.
That has got to be one of the most useful recent features. :-)
The pleasure of just copying and paste text in plain ASCII that looks as intended rather than a huge encoded mess of "\r\n"+ concatenations.
But ok, I'm just an ASCII art fan. ^_^
“to break up “ +
“SQL statements” +
“like this for readability “ +
“thus making them hard to edit “ +
“was incredibly useful at my job.”;
(Note: I put a subtle bug in there because it always happened)
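The text-block version of the same thing, for contrast - no "+" chains to mis-space, and the whitespace you see is the whitespace you get (the query is a made-up example):

```java
public class TextBlockDemo {
    // Java 15+ text block: paste-friendly, no "\n" escapes, and no way
    // to silently drop the space between two concatenated fragments.
    static String sql() {
        return """
                SELECT id, name
                FROM users
                WHERE active = true
                ORDER BY name
                """;
    }

    public static void main(String[] args) {
        System.out.print(sql());
    }
}
```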
SQL injection is horrible, but people were managing to do that all these years after prepared statements anyway without text blocks. I really don’t think they made things worse. Same thing with embedding HTML in the code. They were gonna do it anyway.
I have never worked with Java. What is this? Why would one want a class for an integer?
If a primitive value must be treated as an object (e.g., when stored in a Java Collection like ArrayList or when passed to a method that requires an object), Java uses a process called `boxing` to wrap the primitive value into an instance of its corresponding Wrapper class (e.g., Integer, Boolean, Double). These Wrapper objects are allocated on the heap and do possess the necessary object header, making them subject to the GC's management.
i.e. something like:
Integer x = 42;
highlyQuestionableCode(x);
println(x); // "24" WAT?
I'm a fan of JEP-500...

// Before autoboxing
list.add(new Integer(42));
// After autoboxing
list.add(42);
Mostly it's a non-issue now. If you're desperately cycle/memory constrained, you're likely not using Java anyway.

Edit: actually, if someone here is using it for something like that, I'd love to hear the rationale...?
It's mostly a trade-off. Java's tooling, reliability and ecosystem is some of the best around. Even though building high performance software in Java is a bit of a pain, looking at the bigger picture, it's often still worth it.
Java Cards would like to have a word with you. But yeah I know what you mean.
That’s also very likely changing. Lookup “project Valhalla”. It’s still a work in progress but the high level goal is to have immutable values that “code like a class, work like an int”.
PS When I say “changing”: it’s being added. Java tries hard to maintain backward compatibility for most things (which is great).
(I know the irony of Spring is that it became what it replaced. But it got a good ten or fifteen years of productivity before it began getting high on its own supply. )
As someone who has worked on code bases that did not have spring that really should have and had to do everything manually: when used well it’s fantastic.
Now people can certainly go majorly overboard and do the super enterprise-y AbstractBoxedSomethingFactoryFacadeManagerImpl junk. And that is horrible.
But simple dependency injection is a godsend. Ease of coding by just adding an annotation to get a new component you can reference anywhere is great. Spring for controllers when making HTTP endpoints? And validation of the data? Love it!
Some of the other modules, like Spring Security, can be extremely confusing. You can use Aspect Oriented Programming to go overboard and make it nearly impossible to figure out what the hell is happening in the program.
Spring is huge, and it gets criticized for tons of the more esoteric or poorly designed things it has. But the more basic stuff that you’re likely to get 90+ percent of the value out of really makes things a lot better. The relatively common stuff that you’ll see in any Spring Boot tutorial these days.
Passing things down layer after layer gets old. High level stuff takes tons of parameters due to all the layers below.
You end up with God objects that mostly just contain every other object someone might want to reference.
And you know what? That object starts to feel like a great place to put state. Cause it’s already being passed everywhere.
So instead of using Spring to get a ThingService to work with your Thing at the spot you need it, suddenly all the code has access to all the stuff and states. And like the temptation of The One Ring programmers use it.
Now you have a spaghetti rat's nest. Where is the code that deals with the state for a Gloop? It’s now everywhere. Intertwined with the code for a Thing. And a Zoob. It doesn’t need to be. But it is.
It becomes almost impossible to unit test things. Because everything can do/see everything. Untangling and extracting or replacing any part of the whole is a Herculean job.
You don’t need Spring for something tiny. And maybe it’s possible to go without dependency injection in a large app and keep things manageable and easy to understand.
In my career I’ve mostly seen the mess. I’ve helped try to untangle it by slowly introducing Spring.
I’d rather have it, with its organization and standard patterns, than the Wild West of whatever any programmer or junior decided to do over the last 15 years and how that has evolved. In a complex application with lots of programmers coming and going I find it a net benefit.
So you DO like DI, you just do it explicitly. Which is fine.
Before, you used to write "loosely coupled" software by decoupling your business logic from your IO to keep it testable. You could take virtually anything worth testing, 'new' it in a unit test, and bob's your uncle.
Now you write "loosely coupled" software by keeping the coupling between components, but also couple them to a bunch of Spring dependencies too (check your imports!). Now you can't instantiate anything without Spring.
He would have been better served by opening a poll - that would have opened his eyes to the use of these features.
It's like giving operator overloading a rating of 1 in C++/Python. Sure, if you don't find any need for it in your domain, it will look stupid to you.
22 was March 2024
23 was September 2024
24 was March 2025
25 was September 2025
This is much better than the old feature-driven system where e.g. Java 5 and Java 6 were released in Sept 2004 and Nov 2006 respectively!

Very strange reasoning and even stranger results: Streams 1/10?! Lambdas (maybe the biggest enhancement ever) a mere 4/10?!
Sorry, but this is just bogus.
I really prefer to have more lines of code and understand very clearly what each one is doing, rather than cramming too many instructions onto a single line.
Especially when writing JavaFX code which is full of callbacks and event handlers I really don't see any other (useful) option.
Can lambdas be misused? Of course they can - but so can every other code construct.
It's also a little convoluted to work with different types of data.
For this one, I wish they would have taken a bit more inspiration from other languages and spent the time to make it more readable.
That said, I generally like streams a lot, and they do reduce the amount of branching, and having less possible code execution points makes testing easier too.
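As a small example of that loop-to-stream cleanup (names and logic invented for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    // Filter, transform, and join in one expression instead of a mutable
    // loop juggling a StringBuilder and a separator flag.
    static String shout(List<String> names) {
        return names.stream()
                .filter(n -> Character.isUpperCase(n.charAt(0)))
                .map(String::toUpperCase)
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        System.out.println(shout(List.of("Ada", "brian", "Grace", "linus"))); // ADA, GRACE
    }
}
```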
It hasn't worked out in terms of delivering perfect language design, but it has worked out in the sense that Java has an almost absurd degree of backward compatibility. There are libraries that have had more breaking changes this year than the Java programming language has had in the last 17 releases.
A different implementation of lambdas that allow for generic exceptions would probably solve it, but then that introduces other issues with the type system.
My other complaint is that the standard library didn’t have enough pre-made exceptions to cover common usecases.
> collecting all the thrown exceptions in a separate loop
It's really not comfortable to do so in Java since there is no standard `Either` type, but this is also doable with a custom collector.
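One way to sketch that without a standard `Either`: a minimal `Result` record (an assumption for this sketch, not a JDK type) plus `Collectors.partitioningBy`, so failures are collected instead of the first exception aborting the stream:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectErrors {
    // Minimal Either-style carrier; Java has no standard one, so this
    // record is invented for the sketch. Java 16+ for records.
    record Result<T>(T value, Exception error) {
        static <T> Result<T> ok(T v) { return new Result<>(v, null); }
        static <T> Result<T> fail(Exception e) { return new Result<>(null, e); }
        boolean isOk() { return error == null; }
    }

    static Result<Integer> parse(String s) {
        try {
            return Result.ok(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Result.fail(e);
        }
    }

    public static void main(String[] args) {
        // Split successes from failures in one pass.
        Map<Boolean, List<Result<Integer>>> parts = List.of("1", "two", "3").stream()
                .map(CollectErrors::parse)
                .collect(Collectors.partitioningBy(Result::isOk));

        System.out.println(parts.get(true).size());  // 2
        System.out.println(parts.get(false).size()); // 1
    }
}
```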
This is true, but I think that it’s partly true because checked exceptions are cumbersome here. In my ideal world, the majority of functions would throw exceptions, testing cases that today are either missed or thrown as unchecked exceptions.
They exist since v1, which had very different philosophy than Java of 2010s-2020s. 1990s were an interesting time in language design and software engineering. People started reflecting on the previous experiences of building software and trying to figure out how to build better, faster, with higher quality. At that time checked exceptions were untested idea: it felt wrong not to have them based on previous experience with exceptions in C++ codebases, but there were no serious arguments against them.
So they added checked exceptions. That way you can see that a function will only ever throw these two types of exceptions. Or maybe it never throws at all.
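That visibility lives in the method signature itself; a small invented example:

```java
import java.io.IOException;

public class CheckedDemo {
    // The throws clause is part of the contract: callers must handle or
    // re-declare exactly these two checked exceptions, and the compiler
    // enforces it.
    static String load(String name) throws IOException, InterruptedException {
        if (name.isEmpty()) throw new IOException("no name");
        Thread.sleep(1); // may throw InterruptedException
        return "loaded:" + name;
    }

    public static void main(String[] args) {
        try {
            System.out.println(load("config")); // loaded:config
        } catch (IOException | InterruptedException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```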
Of course a lot of people went really overboard early on creating a ton of different kinds of exceptions making everything a mess. Other people just got into the habit of using RuntimeExceptions for everything since they’re not checked, or the classic “throws Exception“ being added to the end of every method.
I tend to think it’s a good idea and useful. And I think a lot of people got a bad taste in their mouth early on. But if you’re going to have exceptions and you’re not going to give some better way of handling errors I think we’re probably better off than if there were no checked exceptions at all.