Programming Principles: A Summary

I previously wrote about an important concept that I always use when thinking about design, which can be found here. In this article we’ll explore a summary of the most essential programming principles and practices that shape the approach and processes of software development. These principles, ranging from fundamental programming concepts to more nuanced approaches to problem-solving, serve as key guidelines for developers in crafting robust, efficient, and maintainable software.

Important Programming Principles

  1. Don’t Repeat Yourself: The principle of “Don’t Repeat Yourself” (DRY), originally presented by Andy Hunt and Dave Thomas in “The Pragmatic Programmer: From Journeyman to Master,” advocates for reducing redundancy in code to a single, authoritative representation within a system. By avoiding code duplication, you not only streamline your coding process but also simplify future maintenance and modifications, as changes or bug fixes need to be made in only one place. However, adhering strictly to the DRY principle without considering the “Rule of Three” – which suggests waiting until you have repeated something three times before abstracting it – can be counterproductive. An overzealous application of DRY can lead to premature abstraction and unnecessary complexity. Therefore, it’s crucial to balance these principles to ensure efficient, maintainable code (the first sketch after this list shows this in code).
  2. Keep It Simple, Stupid (KISS): Keep your code simple and clear, especially in high-level modules. Simple code is easier for other developers to understand and modify, and a good rule of thumb is that each method should address only one problem.
  3. SOLID: This acronym stands for five fundamental principles of object-oriented programming and design: Single Responsibility Principle, Open/Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle.
  4. You Aren’t Gonna Need It (YAGNI): Borrowed from Extreme Programming (XP), YAGNI advises against adding features or code on the assumption that they might be useful later; such speculative code should be removed during refactoring to save time, effort, and cost. Instead of implementing features in anticipation of future needs, focus on developing features as and when they’re needed. Trying to plan for every possibility only adds unnecessary complexity to your software.
  5. Practice consistency: This is arguably the overarching principle of all clean code principles. If you decide to do something a certain way, stick to it throughout the entire project. If you have no choice but to move away from your original choice, explain why in the comments.
  6. Open/Closed Principle (OCP) suggests that once a class is tested and approved, it should be closed for modification but open for extension.
  7. Liskov Substitution Principle (LSP) recommends that objects of a superclass should be replaceable with objects of its subclasses without breaking the correctness of the program.
  8. Interface Segregation Principle (ISP) states that no client should be forced to depend on methods it does not use; prefer several small, client-specific interfaces over one large, general-purpose one.
  9. Dependency Inversion Principle (DIP) means that high-level modules shouldn’t depend on low-level modules, but both should depend on abstractions.
  10. Avoid Tunnel Vision: Consider all possible options and evaluate the potential impact on all aspects of the project.
  11. Avoid Reinventing the Wheel: Make use of existing solutions, libraries, and APIs to accelerate your project.
  12. Graceful Degradation: Ensure your software can handle errors without abruptly shutting down, protecting user data.
  13. Program to an interface, not an implementation: This means that when programming, you should focus on the interfaces (or protocols, or contracts) that objects adhere to, rather than on the specific details of how those objects achieve their behavior. For example, instead of writing a function that specifically sorts an array of integers, you might write a function that sorts any list of objects that can be compared to each other. The function doesn’t care about the implementation details of the objects it’s sorting, only that they have a way to compare themselves to each other. This is programming to an interface (the requirement that objects be comparable) rather than an implementation (a specific way of representing data). The benefit of this approach is that it makes code more flexible and modular: if a function is written to work with any comparable objects, it can be reused with any new class of objects that can be compared; it’s not tied to a specific class or implementation. This promotes decoupling, which makes the code easier to test, maintain, and extend in the future (see the second sketch after this list).
  14. Separation of Concerns (SoC) is a cornerstone in my development approach. SoC involves breaking down a program into distinct, independent parts, each tackling a specific task. For example, when creating a web application that allows user registration and login, I ensure these functionalities are handled by separate modules. This way, any changes in user registration won’t interfere with the login functionality, and vice versa. This approach makes the code easier to maintain and allows team members to work on different sections concurrently, boosting development efficiency.
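To ground DRY and the Rule of Three, here is a minimal sketch (the discount rule and function name are invented for illustration): once the same calculation has appeared a third time, it is extracted into one authoritative function.

```typescript
// Before: the same 10%-discount rule was copy-pasted in three places.
// After the third repetition (Rule of Three), extract a single
// authoritative version so future changes happen in one place only.
function applyDiscount(price: number, rate: number = 0.1): number {
  // Single source of truth: changing the rounding or the default rate
  // now only needs to happen here.
  return Math.round(price * (1 - rate) * 100) / 100;
}

const cartTotal = applyDiscount(59.99);
const invoiceTotal = applyDiscount(120.0);
const quoteTotal = applyDiscount(8.5, 0.15);
```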
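The sorting example from the “program to an interface” item might be sketched like this; the Comparable interface, sortItems, and Money are hypothetical names chosen for illustration:

```typescript
// The function depends only on a "comparable" contract, not on any concrete type.
interface Comparable<T> {
  compareTo(other: T): number; // negative, zero, or positive, like a comparator
}

function sortItems<T extends Comparable<T>>(items: T[]): T[] {
  return [...items].sort((a, b) => a.compareTo(b));
}

// Any class that fulfils the contract can reuse sortItems unchanged.
class Money implements Comparable<Money> {
  constructor(readonly cents: number) {}
  compareTo(other: Money): number {
    return this.cents - other.cents;
  }
}

const sorted = sortItems([new Money(500), new Money(120), new Money(990)]);
```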

Doing the Simplest Thing That Could Possibly Work

The principle of “Doing the Simplest Thing That Could Possibly Work” is another central tenet of Extreme Programming (XP) and Agile methodologies, and it works hand-in-hand with the YAGNI principle. The idea is to start with the simplest possible design that solves the current problem, and then incrementally improve the design as new requirements come in.

Here’s why this principle is important:

  1. Reduces Complexity: The simplest solution is usually the easiest to understand and the least prone to bugs. It doesn’t contain any unnecessary parts, so there’s less that can go wrong.
  2. Focuses on the Problem at Hand: By focusing on solving the immediate problem, we avoid over-engineering and premature optimization. This leads to faster delivery and less wasted effort.
  3. Promotes Iterative Development: The simplest solution can be delivered quickly, allowing for early feedback and iterative improvement. If the solution turns out to be inadequate, it can be improved in the next iteration. This is in contrast to a “big bang” approach, where we spend a long time developing a complex solution and don’t get any feedback until it’s complete.
  4. Facilitates Change: Simple designs are usually more flexible and easier to change than complex ones. This is crucial in today’s rapidly changing business environment, where the ability to adapt to new requirements can be a competitive advantage.
  5. Boosts Confidence and Momentum: Getting something simple working quickly can provide a big morale boost to the development team. It creates a sense of progress and momentum, which can be very motivating.

This principle doesn’t mean that you should always pick the easiest solution, or that you should ignore potential problems that might arise in the future. It’s about starting with a simple solution and then incrementally refining it, rather than trying to come up with a perfect solution upfront. As with the YAGNI principle, the key is to balance the desire for simplicity with the need for functionality and robustness.

Divide and Conquer (Decomposition)

“Divide and Conquer,” also known as “Decomposition,” is a key strategy in computer science and software development for the following reasons:

  1. Manageability: Large problems can be overwhelming and complex. By breaking them down into smaller pieces, they become more manageable and easier to understand.
  2. Parallel Development: Once the problem is broken down, different components can be developed in parallel, potentially reducing the development time. This is particularly relevant in large teams where tasks can be divided among multiple developers or sub-teams.
  3. Testing and Debugging: Smaller, individual components of a larger system are simpler to test and debug. Once each part is working correctly, they can be combined and tested together.
  4. Modularity and Reusability: Breaking a problem into smaller parts often leads to more modular solutions. These modules can sometimes be reused in different parts of the application or even in different projects, leading to less redundancy and improved productivity.
  5. Easier Maintenance: It’s generally easier to maintain and modify a system that’s been broken down into smaller, independent components. Changes in one part of the system are less likely to cause problems in other parts of the system.
  6. Improved Understanding and Communication: Smaller problems are easier to explain and understand, which can improve communication within the team, especially when new members join or when handing over the project.

The ability to break down complex problems into smaller parts is a critical skill in software development, and it’s one of the key techniques that developers use to handle the complexity of large software systems.
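As a small illustration of decomposition, here is a hypothetical order-processing flow (all names invented for this sketch): each step can be written, tested, and understood on its own, and the top-level function simply composes them.

```typescript
// A hypothetical order type; in a real system this would come from your domain model.
type Order = { id: string; items: { sku: string; qty: number; unitPrice: number }[] };

function validate(order: Order): void {
  if (order.items.length === 0) throw new Error(`Order ${order.id} has no items`);
}

function total(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.qty * item.unitPrice, 0);
}

function notifyCustomer(order: Order, amount: number): void {
  console.log(`Order ${order.id} confirmed, total ${amount}`);
}

// The top-level function now reads like a summary of the whole problem.
function processOrder(order: Order): void {
  validate(order);
  const amount = total(order);
  notifyCustomer(order, amount);
}
```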

Readability (Clarity over brevity in code)

The principle of readability underlines the fact that code isn’t just for machines; it’s also a form of communication between humans. Readability is therefore often more important than being overly concise or clever: code is read by humans far more often than it’s written or executed, so it should be optimized for human understanding. Following this principle has these benefits (a small sketch follows the list):

  1. Maintainability: When the code is clear and easy to read, it’s easier for others (or your future self) to maintain, debug, and enhance.
  2. Collaboration: In team environments, many different developers might work on the same codebase. Readability ensures that everyone can understand the code, not just the original author.
  3. Reducing Errors: Code that’s easy to read is easier to reason about, reducing the likelihood of errors and making them easier to spot when they do occur.
  4. Efficiency: It’s more efficient to work with readable code. Developers spend less time trying to understand what the code does and more time adding value.
  5. Knowledge Transfer: When new team members join or when handing over a project, readable code significantly simplifies the learning curve.
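A small before/after sketch of clarity over brevity (a made-up example): both versions compute the same result, but only one announces its intent.

```typescript
// Clever but cryptic: what does this compute?
const f = (xs: number[]) => xs.filter(x => !(x % 2)).reduce((a, b) => a + b, 0);

// The same logic, optimized for human understanding: the names carry the intent.
function sumOfEvenNumbers(numbers: number[]): number {
  const evens = numbers.filter(n => n % 2 === 0);
  return evens.reduce((sum, n) => sum + n, 0);
}
```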

Code for the Maintainer

“Code for the Maintainer” is a principle that encourages developers to write their code in such a way that it’s easily understood and maintained by others. This is crucial for several reasons:

  1. Ease of Understanding: The person who will maintain your code in the future might not be you. It could be another developer on your team, or it could be someone entirely new to the project. If the code is not clear and understandable, it can be very difficult for them to figure out what it does and how it works.
  2. Long-term Efficiency: While it might be faster to write quick-and-dirty code in the short term, it’s often more time-consuming in the long term. If the code is hard to understand, it will take longer to fix bugs, add features, or make other changes in the future.
  3. Prevents Bugs: Well-structured, clear code is less likely to contain bugs, and when bugs do occur, they are easier to track down and fix. Confusing code can often hide subtle bugs that are hard to find and fix.
  4. Promotes Collaboration: When the code is written with the maintainer in mind, it becomes easier for teams to work together on the same codebase. Different team members can more easily understand and build upon each other’s work.
  5. Professionalism and Craftsmanship: Taking the time to write clear, maintainable code shows respect for your fellow developers and for the craft of software development. It’s a sign of professionalism and maturity as a developer.

Key practices for coding for the maintainer include the following (a short sketch follows the list):

  • Commenting your code where necessary to explain why certain decisions were made or to clarify complex sections of code. However, good code should largely speak for itself and be “self-documenting” wherever possible.
  • Following established coding conventions and standards, which makes your code more predictable and easier to understand.
  • Keeping your code DRY (Don’t Repeat Yourself) to avoid redundancy and make your code easier to maintain and modify.
  • Using meaningful names for variables, functions, and classes to make your code more readable and understandable.
  • Breaking down complex functions or classes into smaller, more manageable parts. This makes the code easier to understand and test.
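Here is a short sketch that combines several of these practices: a meaningful function name, a named constant instead of a magic number, and a comment that explains why rather than what (the retry scenario and all names are hypothetical):

```typescript
const MAX_RETRIES = 3; // Named constant instead of a magic number.

async function fetchUserProfile(userId: string): Promise<Response> {
  // Why: the profile service occasionally drops connections under load,
  // so we retry a few times before surfacing the error to the caller.
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      return await fetch(`/api/users/${userId}`);
    } catch (error) {
      if (attempt === MAX_RETRIES) throw error;
    }
  }
  throw new Error("unreachable"); // Satisfies the compiler; the loop always returns or throws.
}
```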

Avoiding Premature Optimization

“Avoiding Premature Optimization” is a principle in software development that suggests that one should not prioritize optimization during the initial stages of coding. This idea was popularized by Donald Knuth, a prominent computer scientist, who stated, “Premature optimization is the root of all evil.”

This principle is important for several reasons:

  1. Efficiency of Development: Optimization often involves making the code more complex and harder to read or modify. If you optimize too early, you may end up wasting time on making parts of your code faster that don’t actually need to be, which could have been better spent on adding features or fixing bugs.
  2. Simplifies Debugging: Optimized code can be more difficult to debug. By keeping the code straightforward and easy to understand in the early stages, you make it easier to track down and fix any bugs that occur.
  3. Prioritizes Correctness: It’s more important to make the code correct than to make it fast. By avoiding premature optimization, you can focus on making sure your code is working properly before you start trying to speed it up.
  4. Avoids Assumptions: Without profiling or measuring, it’s often difficult to know where the performance bottlenecks in a program really are. Premature optimization often involves making assumptions about what parts of the code need to be faster, and these assumptions are often wrong.
  5. Keeps Code Maintainable: Early optimization can make the code harder to read and understand, making it more difficult to maintain in the long term. By avoiding premature optimization, you can keep your code simpler and more maintainable.

It’s important to note that this principle doesn’t mean you should ignore performance considerations entirely. It just means you should optimize at the right time, which is usually after the code is working correctly and you’ve identified performance bottlenecks through profiling or other measurements. And, of course, if you’re working in a performance-critical context (like game development or high-frequency trading), you might need to think about performance earlier in the process. But for most software development, avoiding premature optimization is a useful principle to follow.
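In practice, “the right time” starts with measurement rather than guesswork. A minimal timing sketch (transform() is a stand-in for whatever code you suspect is slow):

```typescript
// A deliberately straightforward implementation; correctness first.
function transform(data: number[]): number[] {
  return data.map(n => Math.sqrt(n));
}

const data = Array.from({ length: 1_000_000 }, (_, i) => i);

const start = performance.now();
transform(data);
const elapsed = performance.now() - start;

// Only if the measurement reveals a real bottleneck is it worth
// complicating this code in the name of speed.
console.log(`transform took ${elapsed.toFixed(1)} ms`);
```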

Boy Scout Rule

The “Boy Scout Rule” in software development refers to the practice of leaving the code cleaner than you found it. This principle, popularized by Robert C. Martin in his book “Clean Code”, suggests that every time you work on a piece of code, you should strive to improve it, even if only in small ways.

The Boy Scout Rule is important in the context of software development for several reasons:

  1. Code Quality: Over time, as more and more developers work on a piece of code, if each one leaves it a bit cleaner than they found it, the overall quality of the code improves. This makes the code easier to understand and maintain, and less prone to bugs.
  2. Long-Term Efficiency: Although it may take a little more time upfront to clean up and improve the code you’re working on, this investment can pay off in the long run by reducing the time it takes to add new features or fix bugs in the future.
  3. Continuous Improvement: Following the Boy Scout Rule encourages a mindset of continuous improvement. It’s a way of acknowledging that no code is perfect and that there’s always something we can do to make it better.
  4. Teamwork and Collaboration: When everyone on the team follows the Boy Scout Rule, it shows respect for each other’s work and creates a positive, collaborative culture.
  5. Learning Opportunity: Refactoring and cleaning up the code provides an excellent opportunity for developers to learn from the existing code, improve their coding skills, and better understand the system.

Remember, the goal isn’t to refactor large chunks of the system randomly. The idea is to make small, incremental improvements in the code that you’re already working on. This could be as simple as renaming a poorly named variable, breaking a long function into smaller, more manageable parts, or adding comments to clarify a complex piece of code. As with all software development principles, it’s important to balance the desire to clean up the code with the need to deliver working software.
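A boy-scout improvement can be as small as the following contrived before/after: renaming a variable and extracting a magic number, with no change in behavior.

```typescript
// Before: a vague name and an unexplained magic number.
function price1(p: number): number {
  const x = p * 0.2;
  return p + x;
}

// After: the same behavior, left slightly cleaner than we found it.
const VAT_RATE = 0.2;

function priceWithVat(netPrice: number): number {
  const vat = netPrice * VAT_RATE;
  return netPrice + vat;
}
```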

Fail Fast

“Fail fast” is a software development philosophy that suggests it is better to fail early and visibly than to fail later in a hidden or obscure manner. This principle is important for several reasons:

  1. Error Detection: By following the fail-fast principle, potential issues and bugs in the system are exposed early. This allows developers to identify and correct errors during the development stage rather than in production, which can lead to significant time and cost savings.
  2. Prevent Data Corruption: In systems where accuracy of data is critical, failing fast upon encountering an error helps prevent incorrect data from propagating through the system, potentially corrupting the dataset or leading to erroneous results.
  3. Enhanced Debugging: When a system fails fast, it often means that the stack trace is more relevant to the actual issue that caused the failure, which can make debugging easier. This is because the error occurs close to where the root cause of the problem lies.
  4. Reduced Risk: Early detection of problems helps reduce project risk. It’s easier and cheaper to fix problems early in development, and early failures can reveal design flaws or incorrect assumptions that, if left unchecked, could cause major issues later on.
  5. Improved System Reliability: Over time, the fail-fast approach helps improve the overall quality and reliability of the software. Since issues are caught and resolved promptly, the software becomes more robust and dependable.

Note that the fail-fast principle is not about promoting failure, but about creating transparency when things do go wrong, so you can learn from mistakes, correct them, and move forward.
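One common way to apply fail fast is with guard clauses that validate inputs at the boundary and throw immediately. A minimal sketch (the Transfer class and its fields are invented for illustration):

```typescript
class Transfer {
  constructor(readonly amountCents: number, readonly toAccount: string) {
    // Fail fast: reject bad input here, at the point of the mistake,
    // instead of letting it surface later as a confusing downstream error.
    if (!Number.isInteger(amountCents) || amountCents <= 0) {
      throw new RangeError(`Invalid amount: ${amountCents}`);
    }
    if (toAccount.trim() === "") {
      throw new Error("Destination account must not be empty");
    }
  }
}

// Throws immediately, with a stack trace that points at the real cause:
const bad = new Transfer(-500, "ACC-42");
```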

Occam’s Razor

Occam’s Razor is a philosophical and scientific principle which posits that the simplest explanation (or solution), among many that account for all the known facts, is likely to be the correct one. When applied to the field of software development, it suggests that developers should strive for simplicity in design, coding, and processes.

Here’s how it can be applied to software development and why it’s important:

  1. Simplicity in Design: Occam’s Razor encourages simplicity in software design. The simplest design that meets the requirements is often the best choice. A complex design with many interconnected parts is harder to understand, harder to maintain, and more prone to errors.
  2. Code Simplicity: The principle encourages writing code that is straightforward and easy to read. This makes it easier to debug, maintain, and extend. Unnecessarily complex code can hide bugs and make the software harder to understand for other developers.
  3. Testing: Simple code with fewer branches is easier to test. Each condition or branch in the code needs to be tested, so fewer branches mean less testing is required. This can lead to fewer bugs and more reliable software.
  4. Performance Optimizations: Performance optimizations should only be added when necessary, as they can often make the code more complex and harder to maintain. A simpler, slower algorithm might be a better choice if it’s fast enough for the task at hand and results in simpler, more maintainable code.
  5. Process Simplicity: Simple processes are easier to follow and less error-prone than complex ones. Occam’s Razor encourages reducing unnecessary steps in development processes.

Law of Demeter

The Law of Demeter, also known as the principle of least knowledge, is a design guideline for developing software, particularly within object-oriented development environments. The principle is named after the Demeter Project at Northeastern University in Boston, where it was first proposed in 1987.

In a nutshell, the Law of Demeter states that an object should only communicate with its immediate neighbors and should have limited knowledge about the structure or properties of other objects. It discourages “chained” method calls like objectA.objectB.method() because, in this case, objectA needs to know about objectB and its methods.

Here’s a more formal statement of the Law of Demeter for object-oriented code:

  • A method M of an object O can invoke the methods of the following kinds of objects:
    1. O itself
    2. M’s parameters
    3. Any objects created/instantiated within M
    4. O’s direct component objects
    5. A global variable, accessible by O, in the scope of M

In other words, the code that interacts with a given object should not reach into that object and interact with its internals. Instead, any necessary operations on an object’s internals should be encapsulated within the object’s own methods.
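The classic wallet example illustrates this (a minimal sketch with invented names, not from the original post): instead of reaching through Customer into Wallet, the caller asks Customer to perform the operation.

```typescript
class Wallet {
  constructor(private balanceCents: number = 0) {}
  deduct(cents: number): void {
    if (cents > this.balanceCents) throw new Error("Insufficient funds");
    this.balanceCents -= cents;
  }
}

class Customer {
  private wallet = new Wallet(5000);
  // Compliant with the Law of Demeter: the operation on the wallet is
  // encapsulated behind a method on Customer, the caller's immediate neighbor.
  pay(cents: number): void {
    this.wallet.deduct(cents);
  }
}

// A violation would look like customer.wallet.deduct(1500), i.e. the caller
// reaching into Customer's internals. Instead:
new Customer().pay(1500);
```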

This principle helps to reduce dependencies among classes or objects, thus making the code more maintainable, easier to modify, and simpler to understand. However, as with any design principle, it’s not an absolute rule, and there are times when it might make sense to violate it in the interest of readability or performance.

This law is important for the following reasons:

  1. Increased Maintainability: By following the Law of Demeter, each unit of the software knows only about its closely-related counterparts. This makes each component less likely to break when changes are made elsewhere in the codebase. It decreases the coupling between components, making them easier to maintain and modify independently of one another.
  2. Improved Modularity: It encourages developers to create more self-contained, independent modules or classes. Each class should only have knowledge of and interaction with closely-associated classes, which leads to a more modular and thus more easily understandable and changeable system.
  3. Easier Testing: When objects are less dependent on the deep structures of other objects, they become easier to unit test. You won’t have to instantiate a bunch of other objects to test a single unit of your software.
  4. Reduction of System Complexity: Systems designed with this principle are less complex since objects are less likely to reach into other objects, manipulating their internals. This makes the system easier to comprehend and less error-prone.
  5. Encapsulation: This law encourages better data encapsulation. When an object’s properties are not directly accessed by other objects but only through its own methods, changes to that object’s implementation will have less effect on the overall system.

Conway’s Law

Conway’s Law, named after computer programmer Melvin Conway who introduced the concept in 1968, states that: “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

This law has profound implications in the context of software development:

  1. System Design and Team Structure: Conway’s Law posits that the structure of systems designed by an organization is likely to mirror the structure of the organization itself. For instance, if a software system is designed by several loosely connected teams, it’s likely the system will reflect this in its modular structure. Understanding Conway’s Law can guide architectural decisions based on team structures or guide team reorganization based on desired architecture.
  2. Communication and Collaboration: The law highlights the importance of effective communication and collaboration within a team or between teams. If teams are siloed, the software could suffer from integration issues, duplicated efforts, or inconsistencies.
  3. Organizational Changes: The law also provides insight into why changes in software systems might require changes in team structure. If an organization wants to shift from a monolithic system to a microservices architecture, they might need to restructure their teams to align with this goal.
  4. Team Scalability: If an organization is growing, awareness of Conway’s Law can guide the scaling of teams in a way that supports efficient system design. Instead of just growing one team indefinitely, it may be more effective to create new teams with defined areas of responsibility in the system.
  5. Cross-Functional Teams: It encourages the creation of cross-functional teams. If your software requires a diverse range of skills (UX, back-end, front-end, QA, etc.), organizing your teams to reflect this diversity can lead to a more coherent and effective design.

Recognizing the implications of Conway’s Law can help organizations anticipate problems in software development processes and create structures that lead to more efficient and effective designs.

Brooks’s Law

Brooks’s Law, introduced by Fred Brooks in “The Mythical Man-Month,” posits: “Adding manpower to a late software project makes it later.” This principle is based on the following observations:

  1. Ramp-Up Time: New team members need time to familiarize themselves with the project, which often involves training and assistance from existing team members. This can reduce the productivity of the existing team.
  2. Communication Overhead: As more people are added to a project, the number of communication channels grows quadratically rather than linearly (see the arithmetic after this list). Everyone needs to stay informed and coordinated, which can consume a significant share of the team’s time.
  3. Task Partitioning: Not all tasks can be efficiently divided among team members. Some tasks may require specific skills or knowledge, while others may have dependencies that prevent them from being effectively divided.
  4. Loss of Cohesion: Adding more people can lead to a dilution of shared vision and understanding of the project, which can further lead to inconsistencies and miscommunication.
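To make the communication overhead concrete: with n people on a team, the number of pairwise communication channels follows the handshake formula below (a standard back-of-the-envelope figure, not from the original text), so overhead grows quadratically rather than linearly.

```latex
\text{channels}(n) = \binom{n}{2} = \frac{n(n-1)}{2},
\qquad \text{channels}(5) = 10, \quad \text{channels}(10) = 45, \quad \text{channels}(20) = 190.
```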

Understanding Brooks’s Law is crucial for project managers and team leaders. It stresses that “throwing more people at the problem” is not always a valid solution, especially for late projects. Instead, it encourages a focus on careful planning, coordination, and effective use of existing resources.

However, this principle isn’t absolute. There are certain situations where adding people to a project can accelerate it, such as when the team is understaffed to begin with, or when additional members can take over certain tasks that aren’t directly related to the project’s core development, like testing or documentation.

The Project Management Triangle (AKA Triple Constraint, Project Triangle)

The Project Management Triangle, also known as the Triple Constraint, Iron Triangle, or Project Triangle, is a model often used in project management, including in software development. It illustrates the constraints of the project and represents the three main aspects that are in a constant tension with each other:

  1. Scope: This represents the features and requirements of the project. Any change in the scope can affect time and cost.
  2. Time: This represents the schedule or timeline of the project. The duration it will take to complete the project can affect the scope and cost.
  3. Cost: This represents the budget of the project. The amount of money available and allocated can affect the scope and the time.

These three aspects are typically represented as the sides of a triangle, where one cannot be changed without affecting the others.

Why It’s Useful in Software Development:

  1. Balancing Priorities: It helps stakeholders and the project team understand the trade-offs between scope, time, and cost. For example, adding features (scope) will likely increase the cost and time needed to complete the project.
  2. Managing Expectations: By understanding the constraints of the project, it can be easier to set realistic expectations with stakeholders, preventing miscommunication or disappointment later in the project lifecycle.
  3. Decision Making: The triangle provides a clear visual representation of the constraints of a project, aiding in decision-making. If you’re running over budget, you may need to cut scope or extend timelines.
  4. Risk Management: Recognizing the constraints allows for identifying and mitigating risks early in the project, which is vital in software development where changes can become costly if not handled early on.
  5. Quality Considerations: While not explicitly represented in the traditional triangle, quality is sometimes considered as being in the center of the triangle, affected by all three constraints. By understanding how scope, time, and cost can impact quality, teams can strive for the right balance.
  6. Agile Framework Alignment: In agile methodologies common in software development, the understanding of these constraints is crucial. It helps in iterative planning and allows teams to adapt as the project evolves.
  7. Performance Measurement: By establishing clear constraints, it is easier to measure and track progress against these parameters, providing key insights and control over the project’s success.

The Project Management Triangle is an important concept for software projects because it provides a valuable framework for understanding constraints, making informed decisions, managing expectations, and balancing priorities, all of which contribute to the success of the project.

Continual Professional Development

Continual professional development, the practice of constantly updating your skills and knowledge, is particularly important in software development for the following reasons:

  1. Rapid Technological Change: The technology industry evolves at an incredibly fast pace. New tools, languages, and practices are being developed all the time. By continually learning, you can stay abreast of these changes and maintain your relevance and effectiveness in the field.
  2. Improving Skills: Continuous learning allows you to improve your skills and knowledge over time, which can make you more efficient and effective as a developer. It can also open up new opportunities for you in terms of the types of projects you can work on.
  3. Career Advancement: In many cases, the more you learn, the more valuable you become to your employer or clients, which can lead to career advancement.
  4. Problem-Solving: The more you know, the more tools you have at your disposal when it comes to solving problems. This can make you a better problem-solver and a more valuable member of your team.
  5. Keeping Up with Industry Standards: As new practices and standards emerge, continuous learning helps ensure that your work adheres to the most current industry standards.

In the fast-paced field of software development, the ability to learn continuously is not just a bonus; it’s a necessity. By regularly updating your knowledge and skills, you can stay at the top of your game and ensure that you’re always bringing the best to your work.

Summary

This blog post provides a comprehensive overview of key principles for effective software development, covering concepts like code readability, maintainability, design patterns, and project management.

Here are the key takeaways:

Code Quality:

  • DRY (Don’t Repeat Yourself): Avoid duplicate code, promoting reusability and easier maintenance.
  • KISS (Keep It Simple Stupid): Write clear, concise code for readability and easier modification.
  • SOLID: Adhere to principles like Single Responsibility and Open/Closed for well-structured, flexible code.
  • Readability: Prioritize clarity over brevity, making code understandable for future developers.
  • Boy Scout Rule: Leave code cleaner than you found it, fostering continuous improvement.

Design and Efficiency:

  • YAGNI (You Aren’t Gonna Need It): Avoid unnecessary features and code, focusing on immediate needs.
  • Avoid Premature Optimization: Prioritize correctness and understanding over early optimization.
  • Fail Fast: Expose errors early during development to save time and effort.
  • Occam’s Razor: Aim for simplicity in design, code, and processes.
  • Law of Demeter: Minimize dependencies and interactions between objects for clarity and maintainability.

Software Development Practices:

  • Conway’s Law: Team structure impacts system design, favoring clear communication and alignment.
  • Brooks’s Law: Adding people to a late project can slow it down due to communication overhead.
  • Project Management Triangle: Balance scope, time, and cost constraints for successful project delivery.
  • Continual Learning: Stay updated with evolving technologies and industry standards.

Following these principles helps developers write clean, maintainable, and efficient code, fostering collaboration, quality, and successful software projects.

References and further readings

  1. The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas: https://www.amazon.com/Pragmatic-Programmer-journey-mastery-Anniversary/dp/0135957052
  2. Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin: https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/B09Y9XKBZR
  3. Extreme Programming Explained: Embrace Change by Kent Beck and Cynthia Andres: https://www.amazon.com/Extreme-Programming-Explained-Embrace-Change/dp/0201616416
  4. Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides: https://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612
  5. Code Complete: A Practical Handbook of Software Construction by Steve McConnell: https://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670
  6. Refactoring: Improving the Design of Existing Code by Martin Fowler: https://www.amazon.com/Refactoring-Improving-Existing-Addison-Wesley-Signature-ebook/dp/B07LCM8RG2
  7. You Don’t Know JS (book series) by Kyle Simpson: https://github.com/getify/You-Dont-Know-JS
  8. Effective Java by Joshua Bloch: https://www.amazon.com/Effective-Java-2nd-Joshua-Bloch/dp/0321356683
  9. Building Microservices: Designing Fine-Grained Systems by Sam Newman: https://www.amazon.com/Building-Microservices-Designing-Fine-Grained-Systems/dp/1492034029
  10. The Mythical Man-Month: Essays on Software Engineering by Frederick P. Brooks Jr.: https://www.amazon.com/Mythical-Man-Month-Anniversary-Software-Engineering-ebook/dp/B00B8USS14
  11. Object-Oriented Design Heuristics by Arthur J. Riel: https://www.amazon.com/Object-Oriented-Design-Heuristics-paperback-Arthur/dp/0321774965
  12. Code Simplicity: The Fundamentals of Software by Max Kanat-Alexander: https://www.amazon.com/Code-Simplicity-Fundamentals-Max-Kanat-Alexander/dp/1449313892