5.0. Design Patterns
What is a software design pattern?
Software design patterns are proven solutions to common problems encountered during software development. They provide a template for solving a problem in a way that has been validated by other developers over time. The concept originated in architecture and was later adapted to software engineering to help developers design more efficient, maintainable, and reliable code. In essence, design patterns serve as blueprints for solving specific software design issues.
Why do you need software design patterns?
- Freedom of choice: In the realm of AI/ML, flexibility and adaptability are paramount. Design patterns enable solutions to remain versatile, allowing for the integration of various options and methodologies without locking into a single approach.
- Code robustness: Python's dynamic nature demands discipline from developers. Design patterns provide a structured approach to coding that enhances stability and reliability.
- Developer productivity: Employing the right design patterns can significantly boost developer productivity. These patterns facilitate the exploration of diverse solutions, enabling developers to achieve more in less time and enhance the overall value of their projects.
While Python's flexibility is one of its strengths, it can also lead to challenges in maintaining code robustness and reliability. Design patterns help to mitigate these challenges by improving code quality and leveraging proven strategies to refine and enhance the codebase.
What are the top design patterns to know?
Design patterns are typically categorized into three types: creational, structural, and behavioral. The following patterns, one from each category, are among the most valuable to know for MLOps:
Strategy Pattern (Behavioral)
The Strategy pattern is crucial in MLOps for decoupling the objectives (what to do) from the methodologies (how to do it). For example, it allows for the interchange of different algorithms or frameworks (such as TensorFlow, XGBoost, or PyTorch) for model training without altering the underlying code structure. This pattern upholds the Open/Closed Principle, providing the flexibility needed to adapt to changing requirements, such as switching models or data sources based on runtime conditions.
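As an illustration, here is a minimal sketch of the Strategy pattern (the class names are hypothetical): a training function depends only on an abstract strategy, so the underlying framework can be swapped without touching the calling code.

from abc import ABC, abstractmethod


class TrainingStrategy(ABC):
    """The objective (train a model); subclasses supply the methodology."""

    @abstractmethod
    def train(self, data: list[float]) -> None: ...


class XGBoostStrategy(TrainingStrategy):
    def train(self, data: list[float]) -> None:
        print("Training with XGBoost...")


class PyTorchStrategy(TrainingStrategy):
    def train(self, data: list[float]) -> None:
        print("Training with PyTorch...")


def run_training(strategy: TrainingStrategy, data: list[float]) -> None:
    # The caller depends only on the interface, not on a specific framework.
    strategy.train(data)


run_training(XGBoostStrategy(), data=[1.0, 2.0, 3.0])
run_training(PyTorchStrategy(), data=[1.0, 2.0, 3.0])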
Factory Pattern (Creational)
After establishing common interfaces, the Factory pattern plays a vital role in enabling runtime behavior modification of programs. It controls object creation, allowing for dynamic adjustments through external configurations. In MLOps, this translates to the ability to alter AI/ML pipeline settings without code modifications. Python's dynamic features, combined with utilities like Pydantic, facilitate the implementation of the Factory pattern by simplifying user input validation and object instantiation.
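For instance, a minimal sketch of the Factory pattern could look like the following (the registry and function names are illustrative): a registry maps configuration strings to classes, so the concrete model is chosen at runtime from external settings.

class RandomForestModel:
    def fit(self) -> None:
        print("Fitting RandomForestModel...")


class SVMModel:
    def fit(self) -> None:
        print("Fitting SVMModel...")


# Registry mapping configuration strings to model classes.
MODEL_REGISTRY = {
    "random_forest": RandomForestModel,
    "svm": SVMModel,
}


def create_model(kind: str):
    """Instantiate a model from an external configuration value."""
    if kind not in MODEL_REGISTRY:
        raise ValueError(f"Unknown model kind: {kind!r}")
    return MODEL_REGISTRY[kind]()


# The concrete class is selected at runtime, e.g., from a YAML or JSON config.
model = create_model("random_forest")
model.fit()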
Adapter Pattern (Structural)
The Adapter pattern is indispensable in MLOps due to the diversity of standards and interfaces in the field. It provides a means to integrate various external components, such as training and inference systems across different platforms (e.g., Databricks and Kubernetes), by bridging incompatible interfaces. This ensures seamless integration and the generalization of external components, allowing for smooth communication and operation between disparate systems.
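Here is a minimal sketch of the Adapter pattern (the external client below is hypothetical): an adapter wraps a component with an incompatible interface and exposes the one your pipeline expects.

class ExternalInferenceClient:
    """Hypothetical third-party client with its own interface."""

    def run_inference(self, payload: dict) -> dict:
        return {"prediction": 0.5}


class InferenceAdapter:
    """Exposes the predict() interface our pipeline expects."""

    def __init__(self, client: ExternalInferenceClient) -> None:
        self._client = client

    def predict(self, features: dict) -> float:
        # Translate between our interface and the external one.
        result = self._client.run_inference(payload=features)
        return result["prediction"]


adapter = InferenceAdapter(ExternalInferenceClient())
print(adapter.predict({"feature": 1.0}))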
How can you define software interfaces with Python?
Python supports two primary methods for defining interfaces: Abstract Base Classes (ABCs) and Protocols.
ABCs utilize Nominal Typing to establish clear class hierarchies and relationships, such as a RandomForestModel being a subtype of a Model. This approach makes the connection between classes explicit:
from abc import ABC, abstractmethod

import pandas as pd


class Model(ABC):
    @abstractmethod
    def fit(self, X: pd.DataFrame, y: pd.DataFrame) -> None:
        pass

    @abstractmethod
    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        pass


class RandomForestModel(Model):
    def fit(self, X: pd.DataFrame, y: pd.DataFrame) -> None:
        print("Fitting RandomForestModel...")

    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        print("Predicting with RandomForestModel...")
        return pd.DataFrame()


class SVMModel(Model):
    def fit(self, X: pd.DataFrame, y: pd.DataFrame) -> None:
        print("Fitting SVMModel...")

    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        print("Predicting with SVMModel...")
        return pd.DataFrame()
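Continuing the example above, the explicit hierarchy can be verified at runtime, and the abstract base class itself cannot be instantiated:

model: Model = RandomForestModel()
model.fit(X=pd.DataFrame({"a": [1, 2]}), y=pd.DataFrame({"b": [0, 1]}))
print(isinstance(model, Model))  # True: nominal subtyping via inheritance

try:
    Model()  # abstract methods fit and predict are unimplemented
except TypeError as error:
    print(error)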
Conversely, Protocols adhere to the Structural Typing principle, embodying Python's duck typing philosophy where a class is considered compatible if it implements certain methods, regardless of its place in the class hierarchy. This means a RandomForestModel is recognized as a Model by merely implementing the expected behaviors.
from typing import Protocol, runtime_checkable

import pandas as pd


@runtime_checkable
class Model(Protocol):
    def fit(self, X: pd.DataFrame, y: pd.DataFrame) -> None:
        ...

    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        ...


class RandomForestModel:
    def fit(self, X: pd.DataFrame, y: pd.DataFrame) -> None:
        print("Fitting RandomForestModel...")

    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        print("Predicting with RandomForestModel...")
        return pd.DataFrame()


class SVMModel:
    def fit(self, X: pd.DataFrame, y: pd.DataFrame) -> None:
        print("Fitting SVMModel...")

    def predict(self, X: pd.DataFrame) -> pd.DataFrame:
        print("Predicting with SVMModel...")
        return pd.DataFrame()
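Continuing this example, neither class inherits from Model, yet both satisfy it structurally. The runtime_checkable decorator additionally enables isinstance checks, which verify method presence (not signatures):

model: Model = RandomForestModel()  # accepted by static type checkers
model.fit(X=pd.DataFrame({"a": [1, 2]}), y=pd.DataFrame({"b": [0, 1]}))
print(isinstance(SVMModel(), Model))  # True: structural subtyping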
Choosing between ABCs and Protocols depends on your project's needs. ABCs provide an explicit, structured approach well suited to applications, while Protocols offer the flexibility of structural typing, which is often a better fit for library development.
How can you better validate and instantiate your objects?
Pydantic is a valuable tool for defining, validating, and instantiating objects according to specified requirements. It utilizes type annotations to ensure inputs meet predefined criteria, significantly reducing the risk of errors in data-driven operations, such as in MLOps processes.
Validating Objects with Pydantic
Pydantic utilizes Python's type hints to validate data, ensuring that the objects you create adhere to your specifications from the get-go. This feature is particularly valuable in MLOps, where data integrity is crucial for the success of machine learning models. Here's how you can leverage Pydantic for object validation:
from typing import Optional

from pydantic import BaseModel, Field


class RandomForestClassifierModel(BaseModel):
    n_estimators: int = Field(default=100, gt=0)
    max_depth: Optional[int] = Field(default=None, gt=0)
    random_state: Optional[int] = Field(default=None, gt=0)


# Instantiate the model with validated parameters
model = RandomForestClassifierModel(n_estimators=120, max_depth=5, random_state=42)
In this example, Pydantic ensures that n_estimators is greater than 0, and that max_depth and random_state are each either None or greater than 0. This kind of validation is essential for maintaining the integrity of your model training processes.
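Invalid inputs are rejected at construction time. For example, passing a non-positive value raises a ValidationError that pinpoints the offending field:

from pydantic import ValidationError

try:
    RandomForestClassifierModel(n_estimators=0)  # violates the gt=0 constraint
except ValidationError as error:
    print(error)  # reports that n_estimators must be greater than 0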
Streamlining Object Instantiation with Discriminated Union
Pydantic's Discriminated Union feature further simplifies object instantiation, allowing you to dynamically select a class based on a specific attribute (e.g., KIND). This approach can serve as an efficient alternative to the traditional Factory pattern, reducing the need for boilerplate code:
from typing import Literal

from pydantic import BaseModel, Field


class Model(BaseModel):
    KIND: str


class RandomForestModel(Model):
    KIND: Literal["RandomForest"]
    n_estimators: int = 100
    max_depth: int = 5
    random_state: int = 42


class SVMModel(Model):
    KIND: Literal["SVM"]
    C: float = 1.0
    kernel: str = "rbf"
    degree: int = 3


# Define a union of model configurations discriminated by KIND
ModelKind = RandomForestModel | SVMModel


class Job(BaseModel):
    model: ModelKind = Field(..., discriminator="KIND")


# Initialize a job from a configuration dictionary
config = {
    "model": {
        "KIND": "RandomForest",
        "n_estimators": 100,
        "max_depth": 5,
        "random_state": 42,
    }
}
job = Job.model_validate(config)
This pattern not only makes the instantiation of objects based on dynamic input straightforward but also ensures that each instantiated object is immediately validated against its respective schema, further enhancing the robustness of your application.
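For instance, switching the discriminator value in the same configuration structure selects and validates a different class:

# Changing KIND selects a different class, validated on the spot.
svm_job = Job.model_validate({"model": {"KIND": "SVM", "C": 0.5}})
print(type(svm_job.model).__name__)  # SVMModel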
Incorporating these practices into your MLOps projects can significantly improve the reliability and maintainability of your code, ensuring that your machine learning pipelines are both efficient and error-resistant.