This isn't really true. Your mock implementation can embed the interface, but only implement the one required method. Calling the unimplemented methods will panic, but that's not unreasonable for mocks.
That is:
type mockS3 struct {
    S3Client
}

func (m mockS3) PutObject(...) {
    ...
}
You don't have to implement all the other methods.

Defining a zillion interfaces, all the permutations of methods in use, makes it hard to come up with good names, and thus hard to read.
Lots of such frequently-quoted Go "principles" are invalid and are regularly broken within the standard library and many popular Go projects. And if you point them out, you will be snootily advised by the Go gurus on /r/golang or even here on HN that every principle has exceptions. (Even if there are tens of thousands of such exceptions).
Sounds much better than the interface boilerplate if it's just for the sake of testing.
"makes it hard to cone up with good names" is not really a problem, if you have a `CreateRequest` method you name the interface `RequestCreator`. If you have a request CRUD interface, it's probably a `RequestRepository`.
The benefits outweigh the drawbacks 10 to one. The most rewarding thing about this pattern is how easy it is to split up large implementations, and _keep_ them small.
// embedded S3Client not properly initialized
mock := mockS3{}
// somewhere inside the business logic
s3.UploadReport(...) // surprise
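A runnable sketch of both sides of this exchange (the `S3Client` method set here is invented for illustration; the real AWS SDK interface looks different): the embedded-interface mock satisfies the full interface while implementing only `PutObject`, and calling anything else panics on the nil embedded interface.

```go
package main

import "fmt"

// Stand-in for a wide client interface; the real one has many more methods.
type S3Client interface {
	PutObject(key string, data []byte) error
	GetObject(key string) ([]byte, error)
}

// mockS3 embeds the interface, so it satisfies S3Client while only
// implementing PutObject. The embedded S3Client is nil, so any
// unimplemented method panics when called.
type mockS3 struct {
	S3Client
	puts map[string][]byte
}

func (m mockS3) PutObject(key string, data []byte) error {
	m.puts[key] = data
	return nil
}

func main() {
	mock := mockS3{puts: map[string][]byte{}}
	if err := mock.PutObject("report.csv", []byte("a,b,c")); err != nil {
		panic(err)
	}
	fmt.Println("stored:", string(mock.puts["report.csv"]))

	// The "surprise": an unimplemented method panics at run time.
	defer func() { fmt.Println("recovered a panic:", recover() != nil) }()
	mock.GetObject("report.csv")
}
```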
Go is flexible: you can define a complete interface at the producer, and consumers can still use their own interfaces with only the required methods if they want.

Go used to specifically warn against overuse of this pattern in its teaching documentation, but let me offer an alternative so I’m not just complaining: just write functions where the logic is clear to the reader. You’ll thank yourself in 6 months when you’re chasing down a bug.
Interfaces are not precious. Why would anyone care what their name is? Their actual purpose is to wrap a set of behaviors under a single umbrella. Who cares what the color of the umbrella is? It's locally defined (near the function where the behaviors are used). Before passing an object, just make sure that it has the required methods and you're done. You don't have to be creative about what you name an interface. It does a thing? Call it "ThingDoer".
Also, why would you care to know which code implements a particular interface? It's equivalent to asking: give me a list of all types that have this exact set of behaviors. I'm possibly being myopic, but I've never considered this of particular importance, at least not as important as being conservative about the behavior you require from dependencies. Having types enumerate all the interfaces they implement is the old-school approach (e.g. Java). Go's approach is closer to true Interface Segregation. It's done downstream. Just patch the dependency with missing methods. No need to patch the type signature up with needless "implements this, that, other" declarations, which can only create the side effect that, to patch a type from some distant library, you'd have to inherit just so that you can locally declare that you also implement an additional interface. I don't know about you, but to the idea of never having to deal with inheritance in my code ever again I say "good riddance".
Again, interface segregation is about the behavior, not the name. The exact same combination of methods could be defined under a hundred different umbrellas, it would still not matter. If a dependency has the methods, it's good to go.
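A minimal sketch of the consumer-local style being described (all names here are invented for illustration): the interface lives next to the one function that needs it, and the concrete type never mentions it.

```go
package main

import (
	"fmt"
	"strings"
)

// reportSaver is defined next to the one function that uses it; the
// name only has to make sense locally. Any type with a matching Save
// method satisfies it, with no "implements" declaration anywhere.
type reportSaver interface {
	Save(name string, data []byte) error
}

func publishReport(s reportSaver, rows []string) error {
	return s.Save("report.csv", []byte(strings.Join(rows, "\n")))
}

// memSaver is one possible dependency; it never mentions reportSaver.
type memSaver struct{ files map[string][]byte }

func (m memSaver) Save(name string, data []byte) error {
	m.files[name] = data
	return nil
}

func main() {
	m := memSaver{files: map[string][]byte{}}
	if err := publishReport(m, []string{"a,b", "1,2"}); err != nil {
		panic(err)
	}
	fmt.Println(string(m.files["report.csv"]))
}
```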
Not sure how you hold those two things in your head at the same time, but they are anathema to each other. Different implementations of the same function name and type signature can have drastically different effects in go because side effects are everywhere in go, so you must read the implementation to understand what it does.
If this was Haskell and I could read the type signature and trust that I know what the system did with that (ignoring Rich Hickey’s point about the type signature not describing what “reverse” does), then fine. But in every function call there are unconstrained numbers of side effects, and goroutines which persist after the function goes out of scope can modify any pointed-to memory at any arbitrary time later… Go is the Wild West in this regard. The interface method name plus Go's weak type-system function definition is not enough to tell a developer what the implementation of that interface actually does. Finally: Java’s “implements” plus excellent IDE support for Java DI allows a developer to jump to the implementation in one keyboard press; this does not exist in Go. You’ll probably never know what method is actually called unless it’s runtime with the actual config on the server.
I’m not going to explain the whole reasoned argument about why it’s important for a programmer to understand program execution flow in their head clearly, Dijkstra did a much better job than I ever could with GOTO considered harmful, but check out a modern descendant of this article specifically talking about go functions, and try to internalize the point about being able to understand program execution flow:
https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
The point I was trying to draw your attention to was that duck typing as it's done in Go (structural typing, to be more exact) is at the crux of its approach to interfaces. Do you understand duck typing or structural typing?
To summarize what I've already tried to say before, Go interfaces are not Java interfaces. Java cares about the interface as a type, while Go cares only about the listed methods (the behavior). These are two completely different paradigms, they don't mix so well, as former Java programmers doing Go are discovering. In Java, interfaces themselves are important because they're treated as types and programmers tend to be economical about them. They're used everywhere to say that a class implements them and that they're a dependency of some consumer. In Go the interface is just a convenient but unimportant label that points to a list of methods that a consumer expects a particular dependency to have. Only the list of methods is meaningful. The label isn't. That's it. Done.
Again, completely different paradigms. If you embrace Go interfaces, the way you read, write and think about Go code also changes. But complaining about them with a Java mindset is complaining that a wrench is a bad screwdriver.
At the end of the day, it's up to you to decide whether you can open your mind and actually learn to use a tool as it was meant, or just assume that its creator and all the people that claim to do so successfully are in denial for not admitting to share the pains you have.
You’re saying, essentially, that you just use the object's method and you don’t need to read its implementation to understand what it does: if something has an “Update” method, it takes a new copy of the object, and it’s a pointer method, then as a caller we can assume that we give the new data to that Update method and it’ll just take that data and patch it into the object at that pointer address. You don’t have to read the method and can go on with your day; the interface proves it works and you don’t need “implements”.
The problem with this is there are bad programmers who do crazy things with “Update”. Some people will kick off go routines that do things later and mutate things you don’t expect. So when you fetch 500 things then update them in a loop and suddenly nuke your database with 50,000 writes simultaneously that are all contending for the same key, you will go back to Update and see… oh fuck this update method uses some clever Postgres queue and keeps a lot of metrics that are also in Postgres, that’s why my system locked when it shouldn’t have. I should have read this Update method.
So that is the crux of my point. Having only the method name and function parameters is not enough to understand your program, and using single-interface definitions all over the place hurts readability and understanding.
No, more like statically typed duck typing, or more accurately its close cousin, structural typing.
But if you need to hear it from the horse's mouth https://research.swtch.com/interfaces
> You’re saying, essentially, that you just use the objects method and you don’t need to read its implementation to understand what it does
Your recent points have nothing to do with mine. What you're thinking that I'm saying is not what I'm saying. I'm still very much aligned with the original topic of the article, Interface Segregation as it's done in Go. An article to which you reacted adversely, while demonstrating that you're clearly still looking for Java where it doesn't exist. I'll leave things at that, since I'd just be repeating myself at this point.
I think you’d have to have seen the problem, and maybe it only pops up in a large go monorepo that uses DI the IDE can’t see through, while simultaneously having to work with interfaces like “Get” where some of the implementation was written by devs who think Get is a mutation.
You’re right though. We’re talking past each other and both think the other one is a moron. Leave it to the reader to figure out which one is I suppose.
A few well defined interfaces have the advantage of being easy to understand and see usages around the codebase without the overhead of many different variants of an interface. This is extremely important if you are not familiar with a given codebase.
I'm not against segregated interfaces, but I feel like over abstracting can result in code that's harder to understand. There's a balance to be had and thought should go into introducing new interfaces, especially when working on a project with many other devs contributing.
I'm a Java dev, so I'm biased. I love being to easily understand and reason about the type system. I understand that an interface is about a set of behaviors, but when I've worked with Go code I've found it much more difficult to get my IDE to point out all the different ways some interface could be implemented. I see the advantages that Go style interfaces bring, but I personally find it harder to keep a mental model when working with Go.
I actually addressed the root cause of the main point: a misunderstanding of the purpose of interfaces in Go. To me these complaints are analogous to someone saying that they're not able to move fast enough while trying to run underwater. Why don't you try swimming? The fact that whenever a complainer elaborates a bit, it often points to indications that they might be looking for Java in Go, also leads me to connect the original difficulty to the latter misunderstanding.
> A few well defined interfaces have the advantage of being easy to understand and see usages around the codebase without the overhead of many different variants of an interface.
Interfaces in Go are not a thing. They're a notice from a consumer to say "I'll be using these methods on whatever object you pass in this slot". Not much more. They're closely tied to the consumer (or a closely related set of consumers, if you don't want to be too zealous). It's a different mental model, but if you embrace it, it changes the way you write, read, and think about code.
> I've found it much more difficult to get my IDE to point out all the different ways some interface could be implemented.
Implemented? Forget that word completely. Ask instead "does the object I'm about to send have all the required methods?" If not, add the missing ones. That's it. It's all about the methods. Forget the interface itself, it's a label on a piece of napkin, a tag, to list the set of methods required by the consumer on a particular dependency.
I think Python duck-typing philosophy is a much better access door to Go's interfaces than Java interfaces. You just care about how a dependency will be used by its consumer. Now, if as a language designer you wanted to add the discipline of static typing on top of duck-typing, the logical conclusion would be either a syntax for "anonymous" interfaces that lets you duck-type
func Consumer(obj interface{ doThis(string); doThat(int) }) {
    obj.doThis("foo")
    obj.doThat(123)
}
or the less ad-hoc style we've come to know from Go.

For example, if you only use S3, it is premature abstraction to accept an interface for something that may not be S3. Just accept the S3 client itself as input.
Then the S3 client can be designed to be testable by itself by having the lowest-level dependencies (ie, network calls) stubbed out. For example, it can take a fake implementation that has hard-coded S3 URLs mapped to blobs. Everything that tests code with S3 simply has to pre-populate a list of URLs and blobs they need for the test, which itself can be centralized or distributed as necessary depending on the way the code is organized.
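A minimal sketch of the "hard-coded URLs mapped to blobs" idea, using an invented `fakeBlobStore` type (the real S3 client's stubbing hooks would look different): tests pre-populate exactly the URLs and blobs they need, with no network involved.

```go
package main

import (
	"errors"
	"fmt"
)

// fakeBlobStore mimics a remote object store: tests pre-populate the
// URLs and blobs they need, and Fetch never touches the network.
type fakeBlobStore struct {
	blobs map[string][]byte
}

func (f *fakeBlobStore) Fetch(url string) ([]byte, error) {
	b, ok := f.blobs[url]
	if !ok {
		return nil, errors.New("not found: " + url)
	}
	return b, nil
}

func main() {
	store := &fakeBlobStore{blobs: map[string][]byte{
		"s3://bucket/report.csv": []byte("a,b,c"),
	}}
	b, err := store.Fetch("s3://bucket/report.csv")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```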
Generally, I/O is a great level at which to use an interface and to stub things out: network, disk, etc. Then if you have good dependency injection practices, it becomes fairly easy to use real structs in testing and to avoid interfaces that exist purely for testing.
Related reading from the Google style guide, but focused specifically on the transport layer: https://google.github.io/styleguide/go/best-practices.html#u...
So I don't see dependency injection with interfaces as being premature abstractions. You're simply explicitly specifying the API your code depends on, instead of depending on a concrete type of which you might only use one or two methods. I think this is a good pattern to follow in general, with no practical drawbacks.
The reality of development is we have to merge different design philosophies into one code base. Things can get messy. 100% agreed.
The approach I advocate for is more for a) organizing the code you do own, and b) designing in a way that you play nice with others who may import your code.
First, why would you ever add methods to a public interface? Second, the next version of the Backup's implementation might very well want to call Load as well (e.g. for deduplication purposes) and then you suddenly need to add more methods to your fakes anyhow.
In the end, it really depends on who owns FileStorage and Backup: if it's the same team/person, the ISP is immaterial. If they are different, then yes, the owner of Backup() would be better served by declaring a Storage interface of their own and delegating the job of writing adapters that make e.g. FileStorage conform to it to the users of the Backup() method.
In the go world, it's a little more acceptable to do that versus something like Java because you're really not going to break anything
For a public interface, you have to track down all the clients, which may be infeasible, especially in an open ecosystem.
Instead of

type Saver interface {
    Save(data []byte) error
}

you could have

type Saver func(data []byte) error

Seems less bulky than an interface, and more concise to mock too.

It's more effort when you need to "promote" the port / input type to a full interface, but I think that's a reasonable tradeoff to avoid callers of your function constantly creating structs just to hang methods off.
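A runnable sketch of the function-type port (names assumed for illustration): "mocking" becomes just a closure, with no struct or interface needed.

```go
package main

import "fmt"

// Saver is a function type instead of a one-method interface.
type Saver func(data []byte) error

func Backup(save Saver, data []byte) error {
	return save(data)
}

func main() {
	var saved []byte
	// The test double is just a closure capturing a local variable.
	save := func(data []byte) error {
		saved = append([]byte(nil), data...)
		return nil
	}
	if err := Backup(save, []byte("state")); err != nil {
		panic(err)
	}
	fmt.Println(string(saved))
}
```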
I still believe in Go it is better to _start_ with interfaces on the consumer and focus on "what you need" with interfaces instead of "what you provide" since there's no "implements" concept.
I get the mock argument all the time for having producer interfaces and I don't deny at a certain scale it makes sense but I don't understand why so many people reach for it out of the gate.
I'm genuinely curious if you have felt the pain from interfaces on the producer that would go away if there were just (multiple?) concrete types in use or if you happen to have a notion of OO in Go that is hard to let go of?
So much this. I think Go's interfaces are widely misunderstood. Oftentimes when they're complained about, it boils down to "<old OO language> did interfaces this way. Why won't Go abide?" There's an insistence on turning them into cherished pets, vastly more treasured than they ought to be in Go, where an interface is a meaningless thin paper wrapper that says "I require these behaviors".
This is the answer. The domain that exports the API should also provide a high fidelity test double that is a fake/in memory implementation (not a mock!) that all internal downstream consumers can use.
New method on the interface (or behavioral change to existing methods)? Update the fake in the same change (you have to, otherwise the fake won't meet the interface and uses won't compile!), and your build system can run all tests that use it.
Not a mock? But that's exactly what a mock is: An implementation that isn't authentic, but that doesn't try to deceive. In other words, something that behaves just like the "real thing" (to the extent that matters), but is not authentically the "real thing". Hence the name.
What I've seen:
* "test double" - a catch-all term for "not the real thing". What you called a "mock". But this phrasing is more general so the term "mock" can be used elsewhere.
* "fake" - a simplified implementation, complex enough to mimic real behavior. It probably uses a lot of the real thing under the hood, but with unnecessary testing-related features removed. ie: a real database that only runs in memory.
* "stub" - a very thin shim that only provides look-up style responses. Basically a map of which inputs produce which outputs.
* "mock" - an object that has expectations about how it is to be used. It encodes some test logic itself.
The Go ecosystem seems to prefer avoiding test objects that encode expectations about how they are used and the community uses the term "mock" specifically to refer to that. This is why you hear "don't use mocks in Go". It refers to a specific type of test double.
By these definitions, OP was referring to a "fake". And I agree with OP that there is much benefit to providing canonical test fakes, so long as you don't lock users into only using your test fake because it will fall short of someone's needs at some point.
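The fake/mock distinction above can be sketched like this (types invented for illustration): the fake genuinely behaves like a store, while the mock additionally records how it was used so a test can assert on the interaction.

```go
package main

import "fmt"

type Store interface {
	Put(key, val string)
	Get(key string) (string, bool)
}

// fakeStore is a fake: a simplified but genuinely behaving implementation.
type fakeStore struct{ m map[string]string }

func (f *fakeStore) Put(k, v string)             { f.m[k] = v }
func (f *fakeStore) Get(k string) (string, bool) { v, ok := f.m[k]; return v, ok }

// mockStore is a mock in the strict sense: it records calls so a test
// can make assertions about how it was used.
type mockStore struct {
	fakeStore
	putCalls int
}

func (m *mockStore) Put(k, v string) { m.putCalls++; m.fakeStore.Put(k, v) }

func main() {
	f := &fakeStore{m: map[string]string{}}
	f.Put("a", "1")
	v, _ := f.Get("a")
	fmt.Println("fake returned:", v)

	mk := &mockStore{fakeStore: fakeStore{m: map[string]string{}}}
	mk.Put("a", "1")
	fmt.Println("mock recorded calls:", mk.putCalls)
}
```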
Unfortunately there's no authoritative source for these terms (that I'm aware of), so there's always arguing about what exactly words mean.
Martin Fowler's definitions are closely aligned with the Go community I'm familiar with: https://martinfowler.com/articles/mocksArentStubs.html
Wikipedia has chosen to cite him as well: https://en.wikipedia.org/wiki/Test_double#General .
My best guess is that software development co-opted the term "mock" from the vocabulary of other fields, and the folks who were into formalities used the term for a more specific definition, but the software dev discipline doesn't follow much formal vocabulary and a healthy portion of devs intuitively use the term "mock" generically. (I myself was in the field for years before I encountered any formal vocabulary on the topic.)
Something doesn't add up. Your link claims that mock originated from XP/TDD, but mock as you describe here violates the core principles of TDD. It also doesn't fit the general definition of mock, whereas what you described originally does.
Beck seemed to describe a mock as something that:
1. Imitates the real object.
2. Records how it is used.
3. Allows you to assert expectations on it.
#2 and #3 sound much like what is sometimes referred to as a "spy". This does not speak to the test logic being in the object itself. But spies do not satisfy #1. So it seems clear that what Beck was thinking of is more like, say, an in-memory database implementation where it:
1. Behaves like a storage-backed database.
2. Records changes in state. (e.g. update record)
3. Allows you to make assertions on that change in state. (e.g. fetch record and assert it has changed)
I'm quite sure Fowler's got it wrong here. He admits to being wrong about it before, so the odds are that he still is. The compounding evidence is not in his favour.
Certainly if anyone used what you call a mock in their code you'd mock (as in make fun of) them for doing so. It is not a good idea. But I'm not sure that equates to the pattern itself also being called a mock.
Either you copy-paste the same interface over and over, with the maintenance nightmare that entails, or you always have these struct-and-interface pairs where it's unclear why there is an interface to begin with. If the answer is testing, maybe that's the wrong question to begin with.
So, I would rather have duck typing (the structural kind, not just interfaces) for easy testing. I wonder if it would technically be possible to only compile with duck typing in test, in a hypothetical language.
Not exactly the same thing, but you can use build tags to compile with a different implementation for a concrete type while under test.
Sounds like a serious case of overthinking it, though. The places where you will justifiably swap implementations during testing are also places where you will justifiably want to be able to swap implementations in general. That's what interfaces are there for.
If you cannot find any reason why you'd benefit from a second implementation outside of the testing scenario, you won't need it while under test either. In that case, learn how to test properly and use the single implementation you already have under all scenarios.
I don't get this. Just because I want to mock something doesn't mean I really need different implementations. That was my point: if I could just duck-type-swap it in a test, it would be so much easier than 1. create an interface that just repeats all methods, and then 2. need to use some mock generation tool.
If I don't mock it, then my tests become integration test behemoths. Which have their use too, but it's bad if you can't write simple unit tests anymore.
There are no consistent definitions found in the world of testing, but I assume integration here means entry into some kind of third-party system that you don't have immediate control over? That seems to be how it is most commonly used. And that's exactly one of the places you'd benefit from enabling multiple implementations, even if testing wasn't in the picture. There are many reasons why you don't want to couple your application to these integrations. The benefits found under test are a manifestation of the very same, not some unique situation.
What for?
It's generally faster than a build (no linking steps), regardless of the number of things to generate, because it loads types just once and generates everything needed from that. Wildly better than the go:generate based ones.
mockery v3 does not do this. It type-checks just once for ALL mocks, regardless of the number, so it essentially does not grow slower as you create more mocks (since type-checking is usually FAR slower than producing the mock).
1: https://github.com/maxbrunsfeld/counterfeiter/blob/000b82ca1...
I don't even want to think about the global or runtime rewriting that is possible (common) in Java and JavaScript as a reasonable solution to this DI problem.
This uses reflect and is nominally checked at run time, but over time more and more I am distinguishing between a runtime check that runs arbitrarily often over the execution of a program, and one that runs in an init phase. I have a command-line option on the main executable that runs the initialization without actually starting any services up, so even though it's a run-time panic if a service misregisters itself, it's caught at commit time in my pre-commit hook. (I am also moving towards worrying less about what is necessarily caught at "compile time" and what is caught at commit time, which opens up some possibilities in any language.)
The central service module also defines some convenient one-method interfaces that the services can use, so one service may look like:
type myDependencies interface {
    services.UsesDB
    services.UsesLogging
}

func init() {
    services.Register(func(in myDependencies) error {
        // init here
        return nil
    })
}
and another may have

type myDependencies interface {
    services.UsesLogging
    services.UsesCaching
    services.UsesWebCrawler
}

// func init() { etc. }
and in this way, each service declaring its own dependencies means each service's test cases only need to worry about what it actually uses, and the interfaces don't pollute anything else. This fully decouples "the set of services I'm providing from my modules" from "the services each module requires", and while I don't get compile-time checking that a module's service requirements are satisfied, I can easily get commit-time checking.

I also have some default fakes that things can use, but they're not necessary. They're just one convenient implementation for testing if you need them.
DI frameworks, when they're not gigantic monstrosities like in Java, are pretty great.
I've been operating up to this point without this structure in a fairly similar manner, and it has worked fine in the tens-of-thousands-of-lines range. I can see maybe another order or two up I'd need more structure, but people really badly underestimate the costs of these massive frameworks, IMHO, and also often fail to understand that the value proposition of these frameworks often just boils down to something that could fit comfortably in the aforementioned 20-30 lines.
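For a sense of scale, a registry along these lines really can fit in a couple dozen lines. A hedged sketch (all names invented here, not the commenter's actual code): services register init functions, and a single Run drives them with the shared dependencies.

```go
package main

import "fmt"

// deps holds the concrete dependencies; in the fuller pattern each
// service would declare its own small interface satisfied by *deps.
type deps struct {
	log func(string)
	db  map[string]string
}

func (d *deps) Log(msg string)        { d.log(msg) }
func (d *deps) DB() map[string]string { return d.db }

var inits []func(*deps) error

// Register queues a service's init function.
func Register(f func(*deps) error) { inits = append(inits, f) }

// Run executes every registered init with the shared dependencies.
func Run(d *deps) error {
	for _, f := range inits {
		if err := f(d); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	Register(func(d *deps) error {
		d.Log("service A up")
		return nil
	})
	d := &deps{log: func(s string) { fmt.Println(s) }, db: map[string]string{}}
	if err := Run(d); err != nil {
		panic(err)
	}
}
```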
most of the stuff I've done has involved at least 20-30 libraries, many of which have other dependencies and config, so it's on the order of hundreds or thousands of lines if written by hand. it's totally worth a (simple) DI tool at that point.
In .NET, one would simply mock the one or two methods required by the implementation under test. If I'm using Moq, then one would set it up in strict mode, to avoid surprises if the unit under test starts calling something it didn't before.
This isn't really an OO pattern, as the rest of the post demonstrates. It's just a pattern that applies across most any language where you can make a distinction between an interface/typeclass or whatever, and a concrete type.
This is the essence of OOP.
"The notion of an interface is what truly characterizes objects - not classes, not inheritance, not mutable state. Read William Cook's classic essay for a deep discussion on this." - Gilad Bracha
https://blog.bracha.org/primordialsoup.html?snapshot=Amplefo...
Objects, but not OO. OO takes the concept further — what it calls message passing — which allows an object to dynamically respond to messages at runtime, even where the message does not conform to any known interface.
Dynamic typing is a necessary precondition for OO[1], but that is not what defines it. JavaScript, for example, has objects and is dynamically typed, but is not OO. If I call object.random_gibberish in JavaScript, the object will never know. The runtime will blow up before it ever finds out. Whereas in an OO language the object will receive a message containing "random_gibberish" and can decide what to do with it.
[1] Objective-C demonstrated that you can include static-typing in a partial, somewhat hacky way, but there is no way to avoid dynamic-typing completely.
Go channels share some basic conceptual ideas with message passing, but they don't go far enough to bear any direct resemblance to OO; most notably they are not tied to objects in any way.
func Backup(saver func(data []byte) error, data []byte) error {
    return saver(data)
}

In Python, that would be a Protocol (https://typing.python.org/en/latest/spec/protocol.html), which is a newer and less commonly used feature than full, un-annotated duck typing.
Sure, type checking in Python (Protocols or not) is done very differently and less strongly than in Go, but the semantic pattern of interface segregation seems to be equivalently possible in both languages—and very different from duck typing.
Either way, the thing folks are contrasting with here is nominal typing of interfaces, where a type explicitly declares which interfaces it implements. In Go it’s “if it quacks like a duck, it’s a duck”, just statically checked.
In Go it is compile time and Python it is runtime, but it is similar.
In Python (often) you don't care about the type of v just that it implements v.write() and in an interface based separation of API concerns you declare that v.write() is provided by the interface.
The aim is the same, duck typing or interfaces. And the outcome benefits are the same, at runtime or compile time.
However, my point is more that from a SOLID perspective, duck typing and minimal-dependency interfaces sort of achieve similar ends: minimal dependencies and assumptions by the calling code.
Except you need a typed variable that implements the interface or you need to cast an any into an interface type. If the "any" type implemented all interfaces then it would be duck typing, but since the language enforces types at the call level, it is not.
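A small example of that last point, as I understand it (names invented): an `any` value must be explicitly asserted to an interface type before its methods are usable, which is exactly the static enforcement that plain duck typing lacks.

```go
package main

import "fmt"

type Writer interface{ Write(p []byte) (int, error) }

type buf struct{ b []byte }

func (x *buf) Write(p []byte) (int, error) { x.b = append(x.b, p...); return len(p), nil }

func main() {
	var v any = &buf{}
	// An `any` does not satisfy Writer statically; you must assert.
	w, ok := v.(Writer)
	fmt.Println("satisfies Writer:", ok)
	if ok {
		w.Write([]byte("hi"))
	}
	fmt.Println(string(v.(*buf).b))
}
```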