When does the Single Responsibility Principle help us write better code?

How many responsibilities should a class have? As many as it needs! It’s a common joke, but the reality is often far less amusing. How often do we encounter “challenging” code annotated with “Do not change!!!” comments? Software development isn’t rocket science, and there are a few basic rules that most developers know; clearly defining the responsibility of a class seems to be one of them. So what leads to the creation of such code? Is it just a lack of time and tight deadlines? Perhaps. But could it actually be more efficient, at least in some instances, to add new responsibilities to existing components rather than extracting them into separate ones?

On the other side of this argument stands the Single Responsibility Principle (SRP), a well-known tenet of S.O.L.I.D. It proposes that each software component, be it a class or a data structure, should have one distinct responsibility: essentially, one reason to exist and one reason to change. If our software consists of such components, it is likely to be more robust, easier to maintain, and less bug-prone. However, is this principle applicable to all types of software? Is the SRP truly an “unbreakable rule”? Let’s take a look and find out!


It looks good but…

Imagine a typical, all-in-one network controller. Its responsibilities are diverse, ranging from composing requests and checking network connection status to handling errors and decoding responses. I wager we’ve all built a similar component at some point in our careers, right? Let’s take a look at some code:

struct NetworkClient {
    static let shared = NetworkClient()

    private init() {}

    func performRequest<T: Decodable>(with request: URLRequest) async throws -> T {
        guard request.url != nil else {
            throw NetworkError.invalidURL
        }

        do {
            // Executing the call and decoding the response both live here...
            let (data, _) = try await URLSession.shared.data(for: request, delegate: nil)
            let decodedObject = try JSONDecoder().decode(T.self, from: data)
            return decodedObject
        } catch {
            // ... and so does error handling, flattened into a single failure case.
            throw NetworkError.requestFailed(error)
        }
    }
}

At first glance, the above code looks fine. It’s quite concise, readable, and accomplishes its intended function. And that’s what matters, right? Well, the answer may not be so straightforward. The longevity of the app plays a crucial role in determining whether we’ve chosen the right implementation. If the app is merely a proof of concept (POC) or a single-use app (e.g., a showcase for a company event), then the current implementation should suffice.
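
For context, a call site for this client could look roughly like the snippet below; the User model and the endpoint URL are made up for illustration:

import Foundation

// Hypothetical model and endpoint - assumptions made for this example.
struct User: Decodable {
    let id: Int
    let name: String
}

func loadUser() async throws -> User {
    let request = URLRequest(url: URL(string: "https://api.example.com/users/1")!)
    return try await NetworkClient.shared.performRequest(with: request)
}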

However, in most scenarios, we work on a different kind of application. Many Series A tech startups struggle to break even in their first two years, so you might reasonably expect the app to be developed (or at least maintained) for that long. This implies that the networking component you created may have to undergo changes.

So, how easy would it be to implement these changes? Much of it depends on how you answer the following questions:

  • What would you do if you had to connect to different web services (with different URLs)?
  • How would you manage user authentication for different types of requests?
  • How can you implement the requested functionalities without causing any breaking changes to the app?

Naturally, you might opt to create multiple versions of the networking client, each specialized in handling a different type of request. Or perhaps pass a configuration object along with the request you want to send (see the sketch below)? This approach could work… temporarily. Unfortunately, as your code’s complexity increases, it becomes progressively more challenging to prevent regressions while implementing the changes requested by the business, and it would be tricky to write a comprehensive unit test suite to guard against them.
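
As an illustration only, here is a hedged sketch of that configuration-object workaround. The RequestConfiguration type and its fields are assumptions, not part of the original client:

import Foundation

// Hypothetical configuration object - an assumption made for illustration.
struct RequestConfiguration {
    let baseURL: URL
    let requiresAuthentication: Bool
    let additionalHeaders: [String: String]
}

extension NetworkClient {
    func performRequest<T: Decodable>(
        with request: URLRequest,
        configuration: RequestConfiguration
    ) async throws -> T {
        var request = request
        configuration.additionalHeaders.forEach { field, value in
            request.setValue(value, forHTTPHeaderField: field)
        }
        // Branching on baseURL, requiresAuthentication, etc. would accumulate here
        // with every new business requirement...
        return try await performRequest(with: request)
    }
}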

How about many components with single responsibilities?

For contrast, let’s take a look at the following networking component. Although the code is slightly less readable as it uses traditional callbacks, it’s more than adequate to demonstrate the proper separation of concerns:

final class LiveNetworkModule: NetworkModule {
    private let requestBuilder: RequestBuilder
    private let urlSession: NetworkSession
    private let actions: [NetworkModuleAction]
    private let completionExecutor: AsynchronousOperationsExecutor
    
    ...

    @discardableResult public func perform(request: NetworkRequest, completion: ...) -> URLSessionTask? {
        // (1)
        guard let urlRequest = requestBuilder.build(request: request) else {
            execute(completionCallback: completion, result: .failure(.requestParsingFailed))
            return nil
        }

        return perform(urlRequest: urlRequest, withOptionalContext: request, completion: completion)
    }

    @discardableResult func perform(urlRequest: URLRequest, withOptionalContext networkRequest: NetworkRequest? = nil, completion: ...) -> URLSessionTask {
        var urlRequest = urlRequest
        // (2)
        actions.forEach { action in
            action.performBeforeExecutingNetworkRequest(request: networkRequest, urlRequest: &urlRequest)
        }

        // (3)
        let task = urlSession.dataTask(with: urlRequest) { [weak self] data, response, error in
            if let error = error as NSError? {
                self?.handle(error: error, completion: completion)
            } else if let response = response as? HTTPURLResponse {
                self?.handle(
                    response: response, data: data, request: networkRequest, completion: completion)
            } else {
                // (4)
                self?.execute(
                    completionCallback: completion, result: .failure(NetworkError.unknown))
            }
        }
        task.resume()

        return task
    }
    
    ...
}

As you can see, the deliverables are essentially the same as in the previous all-in-one component. However, the execution differs significantly. In this case, each operation that needs to be performed is handled by a dedicated dependency:

  • A RequestBuilder converts a request description into a URLRequest (1).
  • NetworkModuleAction(s) enhance the URLRequest with additional data, such as appending a request header with a field containing an access token (2).
  • A NetworkSession, essentially a wrapper for URLSession, executes a request and provides a response (3).
  • An AsynchronousOperationsExecutor (injected as the completionExecutor) ensures the response is delivered on the main thread (4).

All of these are specialized components, each responsible for delivering a single, well-defined result. It may seem a bit limiting at first glance, but, as we all know, one can have responsibilities and still enjoy life to the fullest…
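
To make these roles more tangible, here is a minimal sketch of how such dependency protocols could be declared, reconstructed from the way they are used above; the exact declarations in the real module (and the NetworkRequest type they rely on) may differ:

import Foundation

// A sketch reconstructed from the usage above - not the module's exact declarations.
protocol RequestBuilder {
    // Converts an app-level request description into a ready-to-send URLRequest.
    func build(request: NetworkRequest) -> URLRequest?
}

protocol NetworkModuleAction {
    // Mutates a URLRequest just before it is sent, e.g. to attach an access token header.
    func performBeforeExecutingNetworkRequest(request: NetworkRequest?, urlRequest: inout URLRequest)
}

protocol NetworkSession {
    // A thin wrapper over URLSession, replaceable with a fake in unit tests.
    func dataTask(
        with request: URLRequest,
        completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void
    ) -> URLSessionTask
}

protocol AsynchronousOperationsExecutor {
    // Executes a block of work, e.g. dispatching completion callbacks onto the main thread.
    func execute(_ block: @escaping () -> Void)
}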

OK, but how does this design help us tackle the change requests coming from the business? Let’s take a look:

  • Q: What would you do if you had to connect to multiple web services (multiple base URLs)?
    A: We could create a dedicated RequestBuilder to handle requests calling a particular service.
  • Q: How would you manage authentication for different types of requests?
    A: Typically, authentication is achieved by adding a header field that includes an access token. Our initial approach could be to develop a NetworkModuleAction that retrieves an access token from in-memory storage and applies it to the URLRequest that’s about to be sent (see the sketch after this list). We could create a similar implementation for each type of authentication the app needs to support.
  • Q: How can you implement the requested functionalities without causing any breaking changes to the app?
    A: When a component is thoroughly covered by unit tests, we can significantly mitigate the risk of regressions. Any disruptive changes we might inadvertently introduce should be immediately picked up by our suite of tests.
    And how do we implement these tests? Given that all dependencies are injectable, we can construct mocks and fakes and use them to take full control of the testing environment. Of course, these dependencies should also be fully testable themselves. For more information, take a look at my blog post about testing networking components in particular.
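
Returning to the authentication answer above, here is a hedged sketch of what such an action could look like. AuthenticationNetworkModuleAction and TokenStorage are hypothetical names introduced purely for illustration, and the NetworkModuleAction signature is assumed to match the sketch shown earlier:

import Foundation

// Hypothetical in-memory token provider - an assumption made for this example.
protocol TokenStorage {
    var currentAccessToken: String? { get }
}

// A possible NetworkModuleAction that authenticates outgoing requests.
struct AuthenticationNetworkModuleAction: NetworkModuleAction {
    let tokenStorage: TokenStorage

    func performBeforeExecutingNetworkRequest(request: NetworkRequest?, urlRequest: inout URLRequest) {
        // If no token is available, the request is sent unauthenticated.
        guard let token = tokenStorage.currentAccessToken else { return }
        urlRequest.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    }
}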

It’s clear that managing change is much simpler when your app consists of multiple, highly specialized, and interchangeable components. And in software engineering, change is not only highly probable, it’s the only certainty. Arguably, what sets well-written software apart from poorly written software is how easily it adapts to change.

Reusability and testability of single-responsibility code

Applying the Single Responsibility Principle to your codebase does more than just make the application more adaptable to change. It also promotes code reusability. Just like in real life, simpler tools often have more use cases.

Let’s consider one such simple tool: the KeychainStorage we discussed in the blog post about abstractions. Its only job is to enable storing data in the Keychain:

protocol LocalStorage {
    // (1)
    func setValue<T: Encodable>(_ value: T, forKey key: String) async throws
    func getValue<T: Decodable>(forKey key: String) async throws -> T?
    func removeValue(forKey key: String) async throws
}

// (2)
final class KeychainStorage: LocalStorage {
    private let keychain: KeychainWrapper

    init(keychain: KeychainWrapper = .genericKeychain) {
        self.keychain = keychain
    }

    @MainActor func setValue<T: Encodable>(_ value: T, forKey key: String) async throws {
        guard let encoded = try? JSONEncoder().encode(value) else {
            throw StorageError.unableToEncodeData
        }
        do {
            try keychain.set(encoded, key: key)
        } catch {
            throw StorageError.dataStorageError
        }
    }

    ...

}

The API is straightforward, allowing you to store, retrieve, and remove Codable chunks of data identified by specific keys (1). If an app service requires access to certain data, we inject it with an instance of KeychainStorage rather than letting it access the Keychain directly. Think of it as a “Lego” brick: most bricks are simple, easy to connect, and replaceable. Now imagine how much effort it would take to assemble a racing car if every piece came in a highly specific shape and size.

Wrapping the KeychainStorage in the LocalStorage protocol (2) provides additional flexibility in case we need to migrate our storage, e.g. to the biometric Keychain. You can find more information on this topic in the blog post about abstractions.
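
For instance, a service could depend solely on the LocalStorage protocol, so the concrete storage can be swapped without touching the service itself. A minimal sketch, where SessionService is a hypothetical name used only for illustration:

// A hypothetical consumer of the LocalStorage abstraction.
final class SessionService {
    private let storage: LocalStorage

    // The default can later be replaced with, e.g., a biometric-Keychain-backed
    // implementation without changing this class at all.
    init(storage: LocalStorage = KeychainStorage()) {
        self.storage = storage
    }

    func storeAccessToken(_ token: String) async throws {
        try await storage.setValue(token, forKey: "accessToken")
    }

    func retrieveAccessToken() async throws -> String? {
        try await storage.getValue(forKey: "accessToken")
    }
}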

In essence, reusing code reduces uncertainty about how your application operates. This is particularly true if that code is thoroughly covered by unit tests:

final class KeychainStorageTest: XCTestCase {
    var fakeKeychain: FakeKeychain!
    var sut: KeychainStorage!

    override func setUp() {
        super.setUp()
        fakeKeychain = FakeKeychain()
        sut = KeychainStorage(keychain: fakeKeychain)
    }

    func test_whenStoringValue_shouldEncodeItAndStoreUnderProvidedKey() async {
        //  given:
        let fixtureKey = "fixtureKey"
        let fixtureValue = "fixtureValue"

        //  when:
        try? await sut.setValue(fixtureValue, forKey: fixtureKey)

        //  then:
        let fixtureEncodedValue = try? JSONEncoder().encode(fixtureValue)
        XCTAssertEqual(fakeKeychain.lastSetValue, fixtureEncodedValue, "Should encode value and store it")
        XCTAssertEqual(fakeKeychain.lastSetValueKey, fixtureKey, "Should use proper key")
    }
    
    ...
    
}

where:

final class FakeKeychain: KeychainWrapper {
    var simulatedStorage: [String: Data]?
    var simulatedStorageError: Error?
    private(set) var lastSetValue: Data?
    private(set) var lastSetValueKey: String?
    
    ...

    func getData(_ key: String, ignoringAttributeSynchronizable: Bool) throws -> Data? {
        simulatedStorage?[key]
    }

    func set(_ value: Data, key: String, ignoringAttributeSynchronizable: Bool) throws {
        if let simulatedStorageError {
            throw simulatedStorageError
        } else {
            lastSetValue = value
            lastSetValueKey = key
        }
    }

    ...

}

Finally, composing complex objects from dependencies that each handle a single responsibility significantly aids debugging. If a feature is faulty but relies on a storage component that works flawlessly across the app, the issue most likely isn’t the storage itself; at worst, the storage component is misconfigured in this particular instance. Either way, the more reusable “Lego” pieces you use to build your feature, the quicker you can isolate a problem.

When can the SRP be broken?

It’s not uncommon for us to break the S.O.L.I.D. rules, often without even realizing it. The question is: When is it acceptable to do so?

Consider the presented networking module. Along with orchestrating network calls, it also analyzes backend responses. Theoretically, this is a separate duty and should be delegated to a specific dependency. However, when implementing this functionality, I consciously chose not to do so. Why? To be practical. The module already has many dependencies. Adding another one, just to calculate the status of a network request, would be excessive. Nonetheless, I’ve placed the code that performs this verification in dedicated extensions:

extension LiveNetworkModule {
    fileprivate func handle(
        response: HTTPURLResponse,
        data: Data?,
        request: NetworkRequest?,
        completion: ((Result<NetworkResponse, NetworkError>) -> Void)?
    ) {
        if let networkError = response.toNetworkError() {
            execute(completionCallback: completion, result: .failure(networkError))
            return
        }

        let networkResponse = NetworkResponse(data: data, networkResponse: response)
        execute(completionCallback: completion, result: .success(networkResponse))
    }
}

and:

public extension NetworkError {
    init?(urlResponse: HTTPURLResponse) {
        let statusCode = urlResponse.statusCode
        if let error = statusCode.toNetworkError(message: HTTPURLResponse.localizedString(forStatusCode: statusCode)) {
            self = error
        } else {
            return nil
        }
    }
}
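
For completeness, here is a hedged sketch of how this NetworkError extension could be unit-tested. It assumes that toNetworkError(message:) maps non-success status codes (e.g. 404) to errors and returns nil for successful ones:

import XCTest

final class NetworkErrorTests: XCTestCase {

    func test_whenStatusCodeIndicatesFailure_shouldCreateNetworkError() {
        let response = makeResponse(statusCode: 404)

        XCTAssertNotNil(NetworkError(urlResponse: response), "Should map a 404 response to a NetworkError")
    }

    func test_whenStatusCodeIndicatesSuccess_shouldNotCreateNetworkError() {
        let response = makeResponse(statusCode: 200)

        XCTAssertNil(NetworkError(urlResponse: response), "Should not produce an error for a 200 response")
    }

    private func makeResponse(statusCode: Int) -> HTTPURLResponse {
        HTTPURLResponse(
            url: URL(string: "https://example.com")!,
            statusCode: statusCode,
            httpVersion: nil,
            headerFields: nil
        )!
    }
}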

This appears to be a reasonable compromise. The code is clear, readable, and partially moved to dedicated extensions. These extensions can also be easily tested (and indeed they are). Of course, this approach does have the downside of violating the SRP, but…

Okay, we’ve discussed instances where common sense prevails over blindly following the Single Responsibility Principle. But, are there cases where breaking this rule can actually be beneficial? Indeed, there are.

  • Experimental code: A proof of concept, test code, or part of a temporary feature, ideally hidden behind a remote feature flag. However, do ensure that this POC does not become a permanent app feature!
  • Part of controlled technical debt: If you’re releasing an app in a hurry and have written a piece of code with the intention of replacing it later, make sure to mark it with appropriate comments. Also, create and prioritize sprint tasks for the refactoring, so the business doesn’t “forget” about that code!
  • Legacy code that almost never changes: If a piece of code works, is rarely used, and doesn’t require maintenance (e.g., a collection of low-level sound processing algorithms), it would be unnecessary to refactor it just for the sake of refactoring. However, ensure such code is well-separated from the rest of the application.
  • Sample / tutorial code: The best way to showcase your library or tool is a simple code snippet demonstrating its use. Your primary goal should be to flatten the learning curve for potential users, not to display your understanding of S.O.L.I.D. principles.

Generally, a useful trick is to ask yourself: “Will this code hurt me in the future?”. Imagine you could fast-forward to a year from now. Assuming the rest of the app’s codebase evolved ideally, would the code you’re about to commit be a source of trouble? Only you can answer that question…

Summary

The Single Responsibility Principle is a cornerstone of S.O.L.I.D. and one of the most important rules in software development. Understanding and applying it will make your code simpler, more readable and able to stand the test of time. As an added benefit, implementing a comprehensive unit test suite becomes much easier for components driven by the SRP, further enhancing their reusability.

Like everything in life, the SRP should be applied with moderation and common sense. It’s acceptable to disregard the rule if it simplifies our code, but this should be a conscious decision: we need to recognize when we’re about to break the SRP and understand the potential consequences. Consider the networking module we’ve discussed: it orchestrates its dependencies and also analyzes server responses. In theory, the latter should be delegated to a dedicated dependency, but would that significantly benefit the module? I highly doubt it.

Finally, we’ve also discussed scenarios where it may be more beneficial to ignore the Single Responsibility Principle (SRP) entirely. Whether it’s a test feature destined for removal after an experiment, or an old, unmaintained internal library, some code is simply not worth your effort. Focus your attention where it’s truly needed, choose your battles wisely, and maximize your time and talents.
