I’ve also been experimenting with Go on a separate project and keep running into the opposite feeling — a lot of relatively common code (fetching/decoding) seems to look so visually messy.
E.g., I find this Swift example from the article to be very clean:
func fetchUser(id: Int) async throws -> User {
    let url = URL(string: "https://api.example.com/users/\(id)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(User.self, from: data)
}
And in Go (roughly similar semantics):
func fetchUser(ctx context.Context, client *http.Client, id int) (User, error) {
    req, err := http.NewRequestWithContext(
        ctx,
        http.MethodGet,
        fmt.Sprintf("https://api.example.com/users/%d", id),
        nil,
    )
    if err != nil {
        return User{}, err
    }
    resp, err := client.Do(req)
    if err != nil {
        return User{}, err
    }
    defer resp.Body.Close()
    var u User
    if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
        return User{}, err
    }
    return u, nil
}
I understand why it's more verbose (a lot of things are more explicit by design), but it's still hard not to prefer the cleaner Swift example. The success path is just three straightforward lines in Swift, while the verbosity of Go effectively buries the key steps in the surrounding boilerplate.
This isn't to pick on Go or say Swift is a better language in practice (and certainly not in the same domains), but I do wish there were a strongly typed, compiled language with the maturity/performance of e.g. Go/Rust and a syntax a bit closer to Swift (or at least closer to how Swift feels in simple demos, or the honeymoon phase).
The first line won't crash, but in practice it's fairly rare that you'd force-unwrap something like that. URLs might be the only case where it's somewhat safe. But a fairer example would be something like:
func fetchUser(id: Int) async throws -> User {
    guard let url = URL(string: "https://api.example.com/users/\(id)") else {
        throw MyError.invalidURL
    }
    // you'll pretty much never see data(from:) in real life
    let request = URLRequest(url: url)
    // configure request
    let (data, response) = try await URLSession.shared.data(for: request)
    guard let httpResponse = response as? HTTPURLResponse,
          200..<300 ~= httpResponse.statusCode else {
        throw MyError.invalidResponseCode
    }
    // possibly other things you'd want to check
    return try JSONDecoder().decode(User.self, from: data)
}
I don't code in Go so I don't know how production-ready that code is. What I posted has a lot of issues with it as well, but it is much closer to what would need to be done as a start. The Swift example is hiding a lot of the error checking that Go forces you to do to some extent.
let request = URLRequest(url: url)
let (data, response) = try await URLSession.shared.data(for: request)
// vs
let (data, response) = try await URLSession.shared.data(from: url)
The second one is for downloading directly from a URL, and I've never seen it used outside of examples in blog posts on the internet.
That aside, your Swift version is still about half the size of the Go version with similar levels of error handling.
func fetchUser(id int) (user User, err error) {
    resp, err := http.Get(fmt.Sprintf("https://api.example.com/users/%d", id))
    if err != nil {
        return user, err
    }
    defer resp.Body.Close()
    return user, json.NewDecoder(resp.Body).Decode(&user)
}
Also, those variables are returned even if you don't explicitly return them, which feels a little unintuitive.
I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.
No real point, here. Just felt so surprised that I couldn't resist saying so...
I think that for the user example it works because the decoder is writing into the same memory as the named return variable `user`.
I like the idea of having named returns, since it's common to return many items as a tuple in Go functions, and I think it's clearer to have those named than leaving it to the user, especially if a function returns many values of the same primitive type like ints/floats:
``` type IItem interface { Inventory(id int) (price float64, quantity int, err error) } ```
compared to
``` type IItem interface { Inventory(id int) (float64, int, error) } ```
but I feel like the memory allocation and control flow implications make it hard to reason about at a glance for non-trivial functions.
It doesn't set `user` to something new; it returns the named result variable `user` itself, the same variable whose address is passed to Decode.
Computing the second return value modifies that variable.
Looks weird indeed, but conceptually both values get computed before they are returned, and by then `user` already holds the decoded data.
async Task<User> FetchUser(int id, HttpClient http, CancellationToken token)
{
    var addr = $"https://api.example.com/users/{id}";
    var user = await http.GetFromJsonAsync<User>(addr, token);
    return user ?? throw new Exception("User not found");
}
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most, which is not a crime but a shame really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means a design flaw and/or misconception of structured concurrency, not "oh I forgot @MainActor".
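To make that concrete, here's a minimal structured-concurrency sketch (not from the article; the thumbnail helper and URL are made up): child tasks fan out across the available cores, and the parent only returns once they've all finished.
import Foundation
// Hypothetical helper: fetch one thumbnail's raw data.
func fetchThumbnail(id: Int) async throws -> Data {
    let url = URL(string: "https://api.example.com/thumbnails/\(id)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}
// Child tasks run in parallel on the cooperative thread pool; the group is
// scoped to the function, so nothing outlives it.
func fetchThumbnails(ids: [Int]) async throws -> [Int: Data] {
    try await withThrowingTaskGroup(of: (Int, Data).self, returning: [Int: Data].self) { group in
        for id in ids {
            group.addTask {
                let data = try await fetchThumbnail(id: id)
                return (id, data)
            }
        }
        var results: [Int: Data] = [:]
        for try await (id, data) in group {
            results[id] = data
        }
        return results
    }
}
That scoping is what makes the parallelism "structured": errors and cancellation propagate through the group rather than leaking into detached work.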
Swift 6.2 is quite decent at its job already; I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it was better known outside of the Apple ecosystem because it fully deserves to be a loved, general purpose mainstream language alongside Python and others.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.
(bracketed statement added by me to make the implied explicit)
This sums up my (personal, I guess) beef with coroutines in general. I have dabbled with them since different experiments were tried in C many moons ago.
I find that programming can be hard. Computers are very pedantic about how they get things done, and it pays for me to be explicit and intentional about how computation happens. The illusion async/await coroutines create, that code simply continues procedurally, demos well for simple cases but often grows difficult to reason about (for me).
Every time I think I “get” concurrency, a real bug proves otherwise.
What finally helped wasn’t more theory, but forcing myself to answer basic questions:
What can run at the same time here?
What must be ordered?
What happens if this suspends at the worst moment?
A rough framework I use now:
First understand the shape of execution (what overlaps)
Then define ownership (who’s allowed to touch what)
Only then worry about syntax or tools
Still feels fragile.
How do you know when your mental model is actually correct? Do you rely on tests, diagrams, or just scars over time?
This level is rocket science. If you can't tell why it is right, you fail. Such a failure, which came down to a single missing synchronized block, was the _worst_ 3-6 month debugging horror I've ever faced: isolated data corruptions once a week on a system pushing millions and trillions of player interactions in that time frame.
We first designed it with many smart people just being adversarial and trying to break it. Then one guy implemented it, 5-6 really talented Java devs reviewed it entirely destructively, and then all of us started working with hardware to write testing setups to break the thing. If there was doubt, it was wrong.
We then put that queue live, one that sequentialized within a single partition (aka a user account) but parallelized across as many partitions as possible, and it just worked. It just worked.
We did similar work on a caching trie later on with the same group of people. But during these two projects I very much realized: This kind of work just isn't feasible with the majority of developers. Out of hundreds of devs, I know 4-5 who can think this way.
Thus, most code should be structured by lower-level frameworks in a way such that it is not concurrent on data. Once you're concurrent on singular pieces of data, the complexity explodes so much. Just don't be concurrent, unless it's trivial concurrency.
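For what it's worth, here's a rough Swift sketch of that shape (nothing to do with the original Java system, and the names are invented): work is serialized within a partition but runs in parallel across partitions. Swift actors only guarantee mutual exclusion, not strict FIFO ordering, so treat this as the idea rather than a drop-in queue.
// One actor per partition (user account): access to a single account is serialized.
actor AccountPartition {
    private var balance = 0
    func apply(delta: Int) -> Int {
        balance += delta
        return balance
    }
}
// Hands out the actor responsible for a given user.
actor PartitionRegistry {
    private var partitions: [Int: AccountPartition] = [:]
    func partition(for userID: Int) -> AccountPartition {
        if let existing = partitions[userID] { return existing }
        let created = AccountPartition()
        partitions[userID] = created
        return created
    }
}
// Different users proceed concurrently; each user's interactions are funneled
// through that user's actor.
func process(interactions: [(userID: Int, delta: Int)], registry: PartitionRegistry) async {
    await withTaskGroup(of: Void.self) { group in
        for interaction in interactions {
            group.addTask {
                let partition = await registry.partition(for: interaction.userID)
                _ = await partition.apply(delta: interaction.delta)
            }
        }
    }
}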
The idea that making things immutable somehow fixes concurrency issues always made me chuckle.
I remember reading and watching Rich Hickey talking about Clojure's persistent data structures and thinking: Okay, that's great: another thread can't change the data that my thread has, because I'll just be using the old copy and they'll have a new, different copy. But now my two threads are working with different versions of reality... that's STILL a logic bug in many cases.
That's not to say it doesn't help at all, but it's EXTREMELY far from "share xor mutate" solving all concurrency issues/complexity. Sometimes data needs to be synchronized between different actors. There's no avoiding that. Sometimes devs don't notice it because they use a SQL database as the centralized synchronizer, but the complexity is still there once you start seeing the effect of your DB's transaction level (e.g., repeatable_read vs read_committed, etc).
Same thing with goto and pointers. Goto kills structured programming and pointers kill memory safety. We're doing fine without either.
Use transactions when you want to synchronise between threads. If your language doesn't have transactions, it probably can't have them, because it already handed out shared mutation, and now it's too late to put the genie back in the bottle.
> This, we realized, is just part and parcel of an optimistic TM system that does in-place writes.
[1] https://joeduffyblog.com/2010/01/03/a-brief-retrospective-on...
In our present context, most mainstream languages have already handed out shared mutation. To my eye, this is the main reason so many languages have issues with writing async/parallel/distributed programs. It's also why Rust has an easier time of it: they didn't just hand out shared mutation. And why Erlang has the best time of it: they built the language around no shared mutation.
I'd argue the default is that work _does_ move across system threads, and single-threaded async/await is the uncommon case.
Whether async "tasks" move across system threads is a property of the executor - by default C#, Swift and Go (though without the explicit syntax) all have work-stealing executors that _do_ move work between threads.
In Rust, you typically are more explicit about that choice, since you construct the executor in your "own" [1] code and can make certain optimizations such as not making futures Send if you build a single threaded one, again depending on the constraints of the executor.
You can see this in action in Swift with this kind of program:
import Foundation
for i in 1...100 {
    Task {
        let originalThread = Thread.current
        try? await Task.sleep(for: Duration.seconds(1))
        if Thread.current != originalThread {
            print("Task \(i) moved from \(originalThread) to \(Thread.current)")
        }
    }
}
RunLoop.main.run()
Note that to run it as-is you have to use a version of Swift earlier than 6.0, since Swift 6.0 stops Thread.current from being exposed in an asynchronous context.
[1]: I'm counting the output of a macro here as your "own" code.
Reading https://docs.swift.org/swift-book/documentation/the-swift-pr..., their first example is:
actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int
    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}
Here, the ‘actor’ keyword provides a strong hint that this defines an actor. The code to call an actor in Swift also is clean, and clearly signals “this is an async call” by using await:
await logger.max
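For instance, a call site could look something like this (the logger instance and the surrounding Task are my own scaffolding, not from the Swift book):
let logger = TemperatureLogger(label: "Outdoors", measurement: 25)
Task {
    // Reading the actor's state from outside is an async call, so the
    // potential suspension point is visible right at the call site.
    let currentMax = await logger.max
    print("Max so far: \(currentMax)")
}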
I know Akka is a library, and one cannot expect all library code to look as nice as code that has actual support from the language, but the simplest Akka example seems to be something like this (from https://doc.akka.io/libraries/akka-core/current/typed/actors...):
object HelloWorld {
  final case class Greet(whom: String, replyTo: ActorRef[Greeted])
  final case class Greeted(whom: String, from: ActorRef[Greet])
  def apply(): Behavior[Greet] = Behaviors.receive { (context, message) =>
    context.log.info("Hello {}!", message.whom)
    message.replyTo ! Greeted(message.whom, context.self)
    Behaviors.same
  }
}
I have no idea how naive readers of that would easily infer that's an actor. I also would not have much idea about how to use this (and I _do_ have experience writing Scala; that is not the blocker). And it gets worse when you look at Akka HTTP (https://doc.akka.io/libraries/akka-http/current/index.html). I have debugged code using it, but still find it hard to figure out where it has suspension points.
You may claim that's because Akka HTTP isn't good code, but I think the point still stands that Akka allows writing code that doesn't make it obvious what is an actor.
https://github.com/apple/swift-distributed-actors is more like Akka, but with better guarantees from the underlying platform because of the first-class nature of actors.
- Robert Virding
Because it's extremely hard to retrofit actors (or, really, any type of concurrency and/or parallelism) onto a language not explicitly designed to support it from scratch.
The end result is a language that brings the worst of both worlds while not really bringing the benefits. An example I will give is SwiftUI, which I absolutely hate. You'd think this thing would be polished, because it's built by Apple for use on Apple devices, so they've designed the full stack from editor to language to OS to hardware. Yet when writing SwiftUI code, it's very common for the compiler to keel over and complain it can't infer the types of the system, and components which are ostensibly "reactive" are plagued by stale data issues.
Seeing that Chris Lattner has moved on from Swift to work on his own language, I'm left to wonder how much of this situation will actually improve. My feeling on Swift at this point is that it's not clear what it's supposed to be. It's the language for the Apple ecosystem, but they also want it to be a general-purpose thing as well. My feeling is it's always going to be explicitly tied to and limited by Apple, so it's never really going to take off as a general-purpose programming language even if they eventually solve the design challenges.
I get all the points about Swift and SwiftUI in theory, I just don't see the results in practice, including (or especially) in Apple's first-party applications.
On Apple platforms, I've had a lot of success in a hybrid model where the "bones" of the app are imperative AppKit/UIKit and declarative SwiftUI is used where it's a good fit, which gives you the benefits of both wherever they're needed, as well as an escape hatch for otherwise unavoidable contortions. Swift's nature as something of a hodgepodge enables this.
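Something like this is what I mean by the hybrid model; a minimal sketch with made-up view names, assuming a UIKit navigation stack:
import SwiftUI
import UIKit
// Illustrative SwiftUI screen used for one part of an otherwise UIKit app.
struct SettingsView: View {
    var body: some View {
        List {
            Toggle("Enable sync", isOn: .constant(true))
        }
    }
}
// The imperative UIKit "bones" of the app.
final class RootViewController: UIViewController {
    func showSettings() {
        // UIHostingController bridges the SwiftUI view into UIKit navigation.
        let settings = UIHostingController(rootView: SettingsView())
        navigationController?.pushViewController(settings, animated: true)
    }
}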
Are there any reference counting optimizations like biased counting? One big problem with Python multithreading is that atomic RCs are expensive, so you often don't get as much performance from multiple threads as you expect.
But in Swift it's possible to avoid atomics in most cases, I think?
And after all this "fucking approachable swift concurrency", at the end of the day, one still ends up with a program that can deadlock (because of resources waiting for each other) or exhaust available threads and deadlock.
Also, the overload of keywords and language syntax around this feature is mind blowing... and keywords change meaning depending on compiler flags so you can never know what a code snippet really does unless it's part of a project. None of the safeties promised by Swift 6 are worth the burnout that would come with trying to keep all this crap in one's mind.
Preventing deadlock wasn't a goal of Swift concurrency. Like all options, there are trade-offs. You can still use GCD.
Yes they do. Just imagine seeing the following in a single file/function: Sendable, @unchecked Sendable, @Sendable, sending, nonsending, @concurrent, async, @escaping, weak, Task, MainActor.
For comparison, Rust has 59 keywords in total. Swift has 203 (?!), Elixir has 15, Go has 25, Python has 38.
> You can still use GCD.
Not if you want to use anything from Swift concurrency, because the two aren't made to work together.
It dilutes any point you were trying to make if you don't actually distinguish between what's a keyword and what's a type.
Swift does indeed have a lot of keywords [1], but neither Task nor MainActor is among them.
[1]: https://github.com/swiftlang/swift-syntax/blob/main/CodeGene...