In this post, we will talk about handling large datasets in distributed systems. This is not about big data or machine learning, where you handle large data routinely. But in general, when you start scaling a distributed system, you will start processing various transactional and reporting data. How do you handle those kinds of large datasets? If you are a beginner with distributed systems, you can read fundamentals of distributed systems or building event-driven microservices.
Why handle large datasets?
The first question is why we need to handle large datasets at all. In practice, there can be many reasons: migrating a large set of data after upgrading a legacy system, processing historical data, or an existing system growing with transactional data. All these scenarios come with complexity and scale. When designing a distributed system, one can try to take such scenarios into account. But every decision in a distributed system is a trade-off, and no matter how well you are prepared, you might come across a scenario that you have not accounted for. How do you handle such cases?
Ways for handling large datasets
We have a large dataset. How do we process it? This dataset can be for reporting or even auditing purposes.
Chunking
Chunking is one way of processing this data. We take a large dataset, split it into multiple data chunks, and then process each chunk. As simple as that – process the data in n chunks.
In this scenario, you need to know the characteristics of the data to split it into multiple chunks. There can also be side effects with chunking. Imagine if you read a gigabyte of data into memory and then tried to split it – that would create performance issues. In such scenarios, you need to think about how to read data from a data store or database in chunks, probably using filters. Pagination is one such example.
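To make this concrete, here is a minimal sketch of chunked reading through pagination, assuming a JDBC data source and a hypothetical transactions table; the chunk size, query, and connection URL are illustrative only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ChunkedReader {

    private static final int CHUNK_SIZE = 10_000;

    public static void main(String[] args) throws SQLException {
        // Hypothetical JDBC URL and table name - adjust for your data store
        try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            int offset = 0;
            int rowsInChunk;
            do {
                rowsInChunk = processChunk(connection, offset);
                offset += CHUNK_SIZE;
            } while (rowsInChunk == CHUNK_SIZE);
        }
    }

    // Reads one chunk with LIMIT/OFFSET so the whole table never sits in memory
    private static int processChunk(Connection connection, int offset) throws SQLException {
        String sql = "SELECT id, payload FROM transactions ORDER BY id LIMIT ? OFFSET ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setInt(1, CHUNK_SIZE);
            statement.setInt(2, offset);
            int rows = 0;
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    // process a single record here
                    rows++;
                }
            }
            return rows;
        }
    }
}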
MapReduce
MapReduce is a programming model where you take data and pass it through map and reduce functions.
Map takes a key/value pair as input and produces a sequence of intermediate key/value pairs. The data is then sorted so that records with the same key are grouped together. Reduce accepts all the values belonging to the same key and combines them into a new key/value pair.
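The classic illustration of this model is a word count. The sketch below is not Hadoop or any MapReduce framework; it only mimics the map, group-by-key, and reduce phases using plain Java streams:

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("to be or not to be", "to see or not to see");

        Map<String, Long> counts = lines.stream()
                // map phase: emit one record per word
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                // shuffle + reduce phase: group identical keys and reduce their values to a count
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));

        counts.forEach((word, count) -> System.out.println(word + " -> " + count));
    }
}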
Streaming
Streaming can be considered similar to chunking, but with streaming, you don’t need any custom filters. Also, many programming languages offer streaming syntax to process a large dataset. We can process a large dataset from a file or a database through a stream. Streaming data is a continuous flow of data generated by various sources, and there are systems that can transmit data through event streams.
Streaming allows the processing of data in near real-time. Applications like Kafka allow you to send data and consume it almost immediately. The performance measure for streaming is latency.
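As a small illustration of language-level streaming, the following sketch processes a large file lazily, line by line, so the whole dataset never has to fit in memory; the file path and the per-record logic are placeholders:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FileStreamingExample {

    public static void main(String[] args) throws IOException {
        // The file is read lazily, one line at a time
        try (Stream<String> lines = Files.lines(Paths.get("/tmp/large-dataset.csv"))) {
            lines.filter(line -> !line.trim().isEmpty())
                 .forEach(FileStreamingExample::process);
        }
    }

    private static void process(String line) {
        // business logic for a single record goes here
    }
}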
Batch Processing
Batch processing allows processing a set of data in batches. So if you have 100K records, you can set a limit to process only 10K records per batch. Spring Boot also offers Spring Batch for batch processing. Batch processes are scheduled jobs that run a set program to process a set of data and then produce output. The performance measure of batch processing is throughput.
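Below is a minimal, framework-free sketch of a scheduled batch job that drains records in fixed-size batches and reports throughput; in a real system you would more likely rely on cron, Quartz, or Spring Batch, and the fetch and process methods here are placeholders:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyBatchJob {

    private static final int BATCH_SIZE = 10_000;

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the batch job once a day
        scheduler.scheduleAtFixedRate(NightlyBatchJob::runBatch, 0, 24, TimeUnit.HOURS);
    }

    private static void runBatch() {
        long start = System.currentTimeMillis();
        long total = 0;
        List<String> batch;
        // fetchNextBatch stands in for a paginated query against the data store
        while (!(batch = fetchNextBatch(BATCH_SIZE)).isEmpty()) {
            batch.forEach(NightlyBatchJob::process);
            total += batch.size();
        }
        long seconds = Math.max(1, (System.currentTimeMillis() - start) / 1000);
        // throughput is the usual performance measure for a batch job
        System.out.println("Processed " + total + " records at " + (total / seconds) + " records/sec");
    }

    private static List<String> fetchNextBatch(int limit) {
        return Collections.emptyList(); // placeholder: return the next <limit> unprocessed records
    }

    private static void process(String record) {
        // business logic for a single record
    }
}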
Conclusion
In this post, we discussed different ways to process a large set of data. Do you know any other way? Please comment on this post and I will add it to this list.
With ever-growing distributed systems, one will need to handle large data sets eventually. It’s always good to revisit these methods to understand the fundamentals of data processing.
In this post, we will learn about Spring Cloud Function and deploy an example of a Spring Cloud Function on AWS Lambda. By the end of this post, we will have a better understanding of serverless functions. If you want to learn more about serverless architecture, this post will get you started.
What is Spring Cloud Function?
Spring Cloud Function is one of the features of Spring Cloud. It allows developers to write cloud-agnostic functions with Spring features. These functions can be stand-alone classes that one can easily deploy on any cloud platform to build a serverless framework. Spring Cloud offers the library spring-cloud-starter-function-web, which allows building functions with Spring features and brings in all the necessary dependencies.
Why use Spring Cloud Function?
This question is really about when to use Spring Cloud Function. Basically, the Spring Cloud Function library allows the creation of functional applications that can be deployed easily on AWS Lambda. These functions follow the Java 8 functional interfaces Supplier, Consumer, and Function.
The spring-cloud-starter-function-web library provides native interaction for handling requests and streams.
Features of Spring Cloud Function
The major advantage of Spring Cloud Function is that it provides all the features of Spring Boot, like autoconfiguration and dependency injection. But there are more features:
Transparent type conversions of input and output
POJO Functions
REST Support to expose functions as HTTP endpoints
Streaming data to/from functions via Spring Cloud Stream framework
Deploying functions as isolated jar files
Adapters for AWS Lambda, Google Cloud Platform, and Microsoft Azure
Demo
As part of this post, we will create a Spring Cloud Function and deploy it in AWS Lambda. Once we create a regular Spring Boot application, add the following dependencies to your Gradle file:
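A minimal sketch of that dependency block in the Gradle Groovy DSL, assuming the Spring Cloud BOM is applied and using the artifacts discussed in this post (the function web starter, the AWS adapter, and JPA with H2 for the Customer demo), could look like this; versions are illustrative:

dependencies {
    // Spring Cloud Function with web support: function catalog, type conversion, HTTP endpoints
    implementation 'org.springframework.cloud:spring-cloud-starter-function-web'
    // Adapter that lets AWS Lambda invoke Spring Cloud Function beans
    implementation 'org.springframework.cloud:spring-cloud-function-adapter-aws'
    // Required so AWS Lambda can hand requests over to our handler
    implementation 'com.amazonaws:aws-lambda-java-core:1.2.1'
    // Persistence for the Customer demo entity, backed by the H2 in-memory database
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    runtimeOnly 'com.h2database:h2'
}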
Note that the dependency spring-cloud-function-adapter-aws allows us to integrate Spring Cloud Function with AWS Lambda.
The main class for the application will look like below:
package com.betterjavacode.springcloudfunctiondemo;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.function.context.FunctionalSpringApplication;
@SpringBootApplication
public class SpringcloudfunctiondemoApplication {
public static void main(String[] args) {
FunctionalSpringApplication.run(SpringcloudfunctiondemoApplication.class, args);
}
}
Compared to a regular Spring Boot application, there is one difference: we are using FunctionalSpringApplication as the entry point. This is a functional approach to writing beans and helps with startup time.
Now, we can write three types of functions: Function, Consumer, or Supplier. We will see what each function does and how we can use them as part of this demo.
Furthermore, let’s create a POJO model class Customer.
package com.betterjavacode.springcloudfunctiondemo.models;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
@Entity
@Table(name= "customer")
public class Customer
{
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
private int customerIdentifier;
private String email;
private String contactPerson;
// JPA requires a no-argument constructor
protected Customer()
{
}
public Customer(String name, int customerIdentifier, String email, String contactPerson)
{
this.name = name;
this.customerIdentifier = customerIdentifier;
this.email = email;
this.contactPerson = contactPerson;
}
public String getName ()
{
return name;
}
public void setName (String name)
{
this.name = name;
}
public int getCustomerIdentifier ()
{
return customerIdentifier;
}
public void setCustomerIdentifier (int customerIdentifier)
{
this.customerIdentifier = customerIdentifier;
}
public String getEmail ()
{
return email;
}
public void setEmail (String email)
{
this.email = email;
}
public String getContactPerson ()
{
return contactPerson;
}
public void setContactPerson (String contactPerson)
{
this.contactPerson = contactPerson;
}
public Long getId ()
{
return id;
}
public void setId (Long id)
{
this.id = id;
}
}
Our Spring Cloud Function will perform some business logic related to this Customer model.
Consumer Function
Let’s create a Consumer function. A Consumer function usually takes an input and performs some business logic that has a side effect on the data. It does not produce any output, so it is more like a void method.
For our demo, it will look like below:
package com.betterjavacode.springcloudfunctiondemo.functions;
import com.betterjavacode.springcloudfunctiondemo.models.Customer;
import com.betterjavacode.springcloudfunctiondemo.repositories.CustomerRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Map;
import java.util.function.Consumer;
@Component
public class CustomerConsumer implements Consumer<Map<String, String>>
{
public static final Logger LOGGER = LoggerFactory.getLogger(CustomerConsumer.class);
@Autowired
private CustomerRepository customerRepository;
@Override
public void accept (Map<String, String> map)
{
LOGGER.info("Creating the customer", map);
Customer customer = new Customer(map.get("name"), Integer.parseInt(map.get(
"customerIdentifier")), map.get("email"), map.get("contactPerson"));
customerRepository.save(customer);
}
}
This CustomerConsumer function implements the Consumer function type and takes an input of type Map<String, String>. As part of the interface contract, one needs to implement the method accept. This method takes the map input and performs some business logic. One thing to understand is that Spring Cloud Function handles type conversion between the raw input stream and the types declared by the function. If the function is not able to infer the type information, it converts the input to a generic map type.
This function takes a map of DTO values for the customer and saves the customer in the database. For the database, we are using the H2 in-memory database. One can always add more business logic, but for demo purposes, we are showing a simple example.
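The CustomerRepository used above is a plain Spring Data repository; a minimal sketch, assuming Spring Data JPA over the H2 database, would be:

package com.betterjavacode.springcloudfunctiondemo.repositories;

import com.betterjavacode.springcloudfunctiondemo.models.Customer;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

// Spring Data generates the implementation, including findAll() and save()
@Repository
public interface CustomerRepository extends JpaRepository<Customer, Long>
{
}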
Supplier Function
The Supplier function acts like a GET endpoint. It takes no input but returns data.
package com.betterjavacode.springcloudfunctiondemo.functions;
import com.betterjavacode.springcloudfunctiondemo.models.Customer;
import com.betterjavacode.springcloudfunctiondemo.repositories.CustomerRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.List;
import java.util.function.Supplier;
@Component
public class CustomerSupplier implements Supplier<Customer>
{
public static final Logger LOGGER = LoggerFactory.getLogger(CustomerSupplier.class);
@Autowired
private CustomerRepository customerRepository;
@Override
public Customer get ()
{
List<Customer> customers = customerRepository.findAll();
LOGGER.info("Getting the customer of our choice - {}", customers);
return customers.get(0);
}
}
Configuring Spring Cloud Function with AWS Lambda
One AWS Lambda will execute only one function. If there are multiple Spring Cloud Function beans, one can configure which function the Lambda should execute. Add the property in application.properties as follows:
spring.cloud.function.definition=customerConsumer
One can easily deploy a single jar file with AWS Lambda and use Spring Profiles to pass different functions in application.properties.
Building Shaded Jar
To deploy the application in AWS Lambda with Spring Cloud Function, you will need a shaded jar. To build this jar, we will use the Gradle Shadow plugin. The build file will look like below:
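A sketch of such a build file, modeled on the Gradle Shadow plugin setup from the Spring Cloud Function AWS samples, could look like the following; plugin versions, the classifier, and the merged metadata files are illustrative and should be checked against the versions you use:

import com.github.jengelman.gradle.plugins.shadow.transformers.PropertiesFileTransformer

plugins {
    id 'org.springframework.boot' version '2.6.0'
    id 'io.spring.dependency-management' version '1.0.11.RELEASE'
    id 'java'
    id 'com.github.johnrengelman.shadow' version '7.1.0'
}

// build the shaded jar as part of the regular build
assemble.dependsOn = [shadowJar]

shadowJar {
    archiveClassifier = 'aws'
    // Spring keeps metadata in service files; they must be merged, not overwritten, when shading
    mergeServiceFiles()
    append 'META-INF/spring.handlers'
    append 'META-INF/spring.schemas'
    transform(PropertiesFileTransformer) {
        paths = ['META-INF/spring.factories']
        mergeStrategy = 'append'
    }
}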
Run the command ./gradlew clean build and it will build the shaded jar. An uber jar contains the contents of all its dependency jars. A shaded jar goes one step further: it builds an uber jar and renames the packages of those dependencies to avoid conflicts. Now, to deploy our jar in AWS Lambda, we have to make sure to include the dependency com.amazonaws:aws-lambda-java-core.
Creating an AWS Lambda in AWS
Next, let’s create an AWS Lambda in AWS.
Provide a descriptive name – SpringCloudFunctionDemo.
Upload the shaded jar.
Now update Runtime Settings in AWS Lambda to indicate how the Lambda will invoke our function. Spring provides a class FunctionInvoker with a generic method handleRequest as part of the library spring-cloud-function-adapter-aws.
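Assuming the standard adapter setup, the handler to enter in Runtime Settings would be the fully qualified FunctionInvoker method:

org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest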
Now if we run the AWS Lambda, we will see the execution of our consumer function. We will test our consumer function with a JSON payload:
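A sample JSON body matching the map keys that CustomerConsumer expects could look like this; the values are made up for the test:

{
  "name": "Acme Corporation",
  "customerIdentifier": "1001",
  "email": "contact@acme.example",
  "contactPerson": "John Doe"
}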
AWS Lambda is a very powerful service for building a serverless framework. With the combination of Spring Cloud and AWS, one can leverage multiple features to build simpler services for handling complex business requirements. Here is another post about connecting a Spring Boot application with AWS DynamoDB.
In this post, I show how to use the pub/sub pattern with a NodeJS application. We will use the Google Cloud Pub/Sub module for building this sample application.
What is Pub/Sub?
Previously, most architectures were synchronous. But with the advent of microservices, asynchronous communication is an equal part of the design. Pub/Sub is one such model that allows asynchronous communication. Usually, in event-driven architecture, one service publishes an event and another service consumes that event.
A message broker plays the relay role when it comes to publishing and consuming the messages. Both Google Cloud (Pub/Sub) and AWS (SNS and SQS) offer services that allow applications to use the pub/sub model. Another advantage of Pub/Sub is that it allows you to set up a retry policy and helps with idempotency. You can learn more about event-driven architecture here.
Push-Pull
In any pub-sub model, there are two patterns of implementation. One is Push and the other is Pull.
In the Pull model:
The consumer sends a request to pull any messages.
Pub/Sub server responds with a message if there are any messages available and not previously consumed.
The consumer sends an acknowledgment.
In the Push model:
The publisher publishes a message to the Pub/Sub server.
The Pub/Sub server sends the message to the specified endpoint on the consumer side.
Once the consumer receives the message, it sends an acknowledgment.
NodeJS Application
As part of this post, we will create a NodeJS application that will use the pub/sub model. This application will send simple messages to Google Cloud Pub/Sub, and we will have another consumer application that will consume those messages.
Before we write our application, let’s make sure you have installed the gcloud emulator in your environment. First, install the gcloud SDK depending on your OS.
Now initialize gcloud in your environment; you will have to log in for this:
gcloud init
Gcloud will ask a bunch of questions to choose a project and configure a cloud environment.
Now, we will install the pub/sub emulator component for gcloud in our local environment.
gcloud components install pubsub-emulator
Now, to get started with the pub/sub service, use the following command:
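Based on the description that follows, this is presumably the Pub/Sub emulator start command; the --project flag is optional:

gcloud beta emulators pubsub start --project=pubsubdemo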
This command will start the pubsub service on your machine at localhost:8085. Since we will have to use this service in our application, we will need to know where the service is located, so set two environment variables:
PUBSUB_EMULATOR_HOST=localhost:8085
PUBSUB_PROJECT_ID=pubsubdemo
Publisher Application
First, we have a publisher application. This application checks whether the topic exists in the Pub/Sub service and creates it if it does not. Once it has created the topic, it sends data through a message to the Pub/Sub service topic.
The code for this application will look like below:
Next comes the consumer application. Basically, this consumer application verifies whether the subscription exists, and if not, it creates a subscription against the topic where our publisher application is sending messages. Once the message arrives in the Pub/Sub topic, the consumer application pulls that message. This application implements the Pull model of pub/sub.
Demo
On starting the pubsub service emulator, we will see a log like below:
Now, let’s execute the publisher application, and we will see a console log of the message being published.
If you execute the same application again, you will not see the message Topic PubSubExample created, since the topic already exists.
Now if we execute the consumer application, we will pull the message that the publisher sent to the topic.
In this post, I showed how to use Pub/Sub with a NodeJS application. Pub/Sub is a powerful model to use in enterprise applications. It allows us to build services that can communicate asynchronously. If you have more questions about this topic, feel free to reach out to me.
Logging is a key part of enterprise applications. Logging not only helps to investigate a problem but also helps to build relevant metrics. These metrics are important from a business perspective; in fact, businesses write service level agreements (SLAs) using these metrics. In this post, we will talk about logging in Spring Boot-based microservices.
Production-level applications can fail at any time for various reasons. For a developer to investigate such issues in a timely manner, it is critical to have logs available. Logs are key for applications to recover.
The question is: what do we log? Developers and software architects invest considerable time deciding what to log. It’s equally important not to log too much information, while making sure you do not lose critical information. Evidently, one should not log any PII (Personally Identifiable Information). A question developers can ask is, "What will help me investigate issues in the code if the application fails?". In particular, if a critical business decision needs a comment in the code, it is equally worth logging that decision.
Additionally, one can use a randomly generated trace id in the logs to correlate requests and responses. The harder part is maintaining this practice throughout the application’s life.
Logging and Observability
Microservices communicate with external APIs and other microservices. Hence, it is important to log the details of such communication. In event-driven microservices, one can log details of events. With cloud infrastructure, it has become easier to collect logs from microservices. Cloud infrastructure like AWS offers CloudWatch to collect these logs, and then one can use the ELK stack to monitor them. Observability tools like New Relic and Sumo Logic also connect with different cloud infrastructures; they collect logs and offer flexibility to display, query, and build metrics based on those logs.
Accordingly, developers have enough tools to log data from applications to improve traceability and debugging.
Logging In Spring Boot Based Microservices
Let’s look at logging in a Spring Boot-based microservice. We will create a simple microservice and show what kind of logging configuration we can use.
Our main class looks like below:
package com.betterjavacode.loggingdemo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class LoggingdemoApplication {
public static void main(String[] args) {
SpringApplication.run(LoggingdemoApplication.class, args);
}
}
Basically, the microservice does not do anything at the moment. Still, we have a main class, and we will see how logging comes into the picture.
As part of the application, I have included a single dependency:
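Given that the demo exposes a REST controller later, that dependency is presumably the web starter; in a Gradle build it would look like this:

dependencies {
    // spring-boot-starter-web pulls in spring-boot-starter-logging (SLF4J + Logback) transitively
    implementation 'org.springframework.boot:spring-boot-starter-web'
}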
This dependency transitively includes spring-boot-starter-logging. spring-boot-starter-logging is the default logging starter that Spring Boot offers. We will look into it in more detail.
Default Logging Configuration
The spring-boot-starter-logging dependency includes SLF4J as a logging facade with Logback as the logging framework.
SLF4J is a logging facade that a number of frameworks support. The advantage of using this facade is that we can switch from one framework to another easily. Logback is the default framework in any Spring Boot application, but we can switch to Log4j, Log4j2, or Java Util Logging easily.
spring-boot-starter-logging includes the required bridges that take logs from other dependencies and delegate them to the logging framework.
Logback Logging Configuration
Building on our microservice and the default logging above, let's see how we can use a custom Logback configuration. If we do not provide any configuration, Spring Boot will use the default configuration for Logback: it appends the logs to the console with the log level set to INFO. Logging frameworks help us propagate logs to different targets like consoles, files, databases, or even Kafka.
With the configuration file (logback-spring.xml), we can also set the pattern of messages. If you want to use Log4j2 instead of Logback, you can read this post about logging and error handling.
The following configuration file shows how we will be logging:
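A logback-spring.xml along these lines, reconstructed from the description that follows, could look like this; the directory, colors, and patterns are illustrative:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Physical directory on the machine where the log files will be saved -->
    <property name="LOGDIRECTORY" value="./logs" />

    <appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%black(%d{ISO8601}) %blue(%-5p) %yellow(%C{1}) [%t] %m%n</pattern>
        </encoder>
    </appender>

    <appender name="RollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOGDIRECTORY}/microservice.log</file>
        <encoder>
            <pattern>%d %p %C [%t] %m%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- archived files are named microservice-<date>-<number>.log -->
            <fileNamePattern>${LOGDIRECTORY}/archived/microservice-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
            <maxFileSize>5MB</maxFileSize>
        </rollingPolicy>
    </appender>

    <!-- Everything is logged at INFO level and above -->
    <root level="info">
        <appender-ref ref="Console" />
        <appender-ref ref="RollingFile" />
    </root>

    <!-- Our own packages are logged at DEBUG level -->
    <logger name="com.betterjavacode" level="debug" additivity="false">
        <appender-ref ref="Console" />
        <appender-ref ref="RollingFile" />
    </logger>
</configuration>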
We will dissect this file to understand what each line in the configuration does.
First, we configure a property LOGDIRECTORY pointing to a physical directory on the machine where log files will be saved. We use this property in the appender and rollingPolicy sections.
Different options for logging
Subsequently, we use appenders from the Logback configuration to configure where we want to append our logs. In this case, we have configured appenders for the console and a file.
For the ConsoleAppender, we use a message pattern that includes the date and time in black, the log level in blue, and the package in yellow. The log message itself uses the default color.
For the RollingFileAppender, we have a line indicating the file name and where it will be stored. In this case, we log to microservice.log in LOGDIRECTORY. The next line indicates the pattern for the log message:
%d – DateTime
%p – log-level pattern
%C – ClassName
%t – thread
%m – message
%n – line separator
Thereafter, we define the rollingPolicy. We want to make sure that we do not keep logging to a single file that grows indefinitely in size. We trigger the rollover of the log file after it reaches a file size of 5 MB and save the old file in the archive directory with the name microservice-<date>-<number>.log.
Moving on, we will discuss the log level in the next section.
Configuring Log Level
The last part of the configuration file indicates the log levels. At the root level, we log at INFO level, so our application will log all messages written at INFO level and above in the code.
The next configuration allows us to set the log level per package. For packages starting with com.betterjavacode, log all messages at DEBUG level.
Executing the Spring Boot Application
Now, let's look at how this works in our demo microservice.
I have a simple RestController in my application that retrieves company information as below:
package com.betterjavacode.loggingdemo.controller;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.ArrayList;
import java.util.List;
@RestController
@RequestMapping("/v1/companies")
public class CompanyController
{
private static final Logger LOGGER = LoggerFactory.getLogger(CompanyController.class);
@GetMapping
public List<String> getAllCompanies()
{
LOGGER.debug("Getting all companies");
List<String> result = new ArrayList<>();
result.add("Google");
result.add("Alphabet");
result.add("SpaceX");
LOGGER.debug("Got all companies - {}", result);
return result;
}
}
Now if we execute our application and access the API http://localhost:8080/v1/companies/, we will get the list of companies, but we will also be able to view the log on the console as below:
The log file will look like below:
2021-12-04 18:20:32,221 INFO org.springframework.boot.StartupInfoLogger [main] Starting LoggingdemoApplication using Java 1.8.0_212 on YMALI2019 with PID 3560
2021-12-04 18:20:32,223 DEBUG org.springframework.boot.StartupInfoLogger [main] Running with Spring Boot v2.6.0, Spring v5.3.13
2021-12-04 18:20:32,224 INFO org.springframework.boot.SpringApplication [main] No active profile set, falling back to default profiles: default
2021-12-04 18:20:33,789 INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer [main] Tomcat initialized with port(s): 8080 (http)
2021-12-04 18:20:33,798 INFO org.apache.juli.logging.DirectJDKLog [main] Initializing ProtocolHandler ["http-nio-8080"]
2021-12-04 18:20:33,799 INFO org.apache.juli.logging.DirectJDKLog [main] Starting service [Tomcat]
2021-12-04 18:20:33,799 INFO org.apache.juli.logging.DirectJDKLog [main] Starting Servlet engine: [Apache Tomcat/9.0.55]
2021-12-04 18:20:33,875 INFO org.apache.juli.logging.DirectJDKLog [main] Initializing Spring embedded WebApplicationContext
2021-12-04 18:20:33,875 INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext [main] Root WebApplicationContext: initialization completed in 1580 ms
2021-12-04 18:20:34,212 INFO org.apache.juli.logging.DirectJDKLog [main] Starting ProtocolHandler ["http-nio-8080"]
2021-12-04 18:20:34,230 INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer [main] Tomcat started on port(s): 8080 (http) with context path ''
2021-12-04 18:20:34,239 INFO org.springframework.boot.StartupInfoLogger [main] Started LoggingdemoApplication in 2.564 seconds (JVM running for 3.039)
2021-12-04 18:20:34,242 INFO com.betterjavacode.loggingdemo.LoggingdemoApplication [main] After starting the application.........
2021-12-04 18:20:39,526 INFO org.apache.juli.logging.DirectJDKLog [http-nio-8080-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
2021-12-04 18:20:39,526 INFO org.springframework.web.servlet.FrameworkServlet [http-nio-8080-exec-1] Initializing Servlet 'dispatcherServlet'
2021-12-04 18:20:39,527 INFO org.springframework.web.servlet.FrameworkServlet [http-nio-8080-exec-1] Completed initialization in 0 ms
2021-12-04 18:20:39,551 DEBUG com.betterjavacode.loggingdemo.controller.CompanyController [http-nio-8080-exec-1] Getting all companies
2021-12-04 18:20:39,551 DEBUG com.betterjavacode.loggingdemo.controller.CompanyController [http-nio-8080-exec-1] Got all companies - [Google, Alphabet, SpaceX]
Tracing the Requests
Previously, I stated why we log. When there are multiple microservices, and each microservice communicates with other microservices and external APIs, it is important to have a way to trace a request. One of the ways is to configure a pattern in logback-spring.xml.
Another option is to use a Filter and MDC (Mapped Diagnostic Context). Basically, each request coming to the API is intercepted through the Filter. In the Filter, you can add a unique id to the MDC map and use a logging pattern that references that key from the MDC map. This way, your request carries tracking information. One thing to remember is to clear the context from MDC once your API has responded to the client.
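A minimal sketch of such a filter, assuming a servlet-based Spring Boot application and a hypothetical TraceIdFilter class, could look like this; the logging pattern would then reference the key with %X{traceId}:

package com.betterjavacode.loggingdemo.config;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import java.io.IOException;
import java.util.UUID;

@Component
public class TraceIdFilter implements Filter
{
    private static final String TRACE_ID = "traceId";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException
    {
        // add a unique id to the MDC so every log statement for this request can carry it
        MDC.put(TRACE_ID, UUID.randomUUID().toString());
        try
        {
            chain.doFilter(request, response);
        }
        finally
        {
            // clear the context once the API has responded to the client
            MDC.remove(TRACE_ID);
        }
    }
}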
Configuring Logs for Monitoring
In the enterprise world, one way to manage logs is to store them in files and stash those files in a central location on a cloud server. AWS offers the flexibility to pull this information into CloudWatch from S3 storage, and then one can use tools like Elasticsearch and Kibana for monitoring the logs and building metrics.
Conclusion
In this post, we detailed how to use logging in Spring Boot-based microservices. We also discussed the Logback configuration that one can use with the Logback framework in a Spring Boot application.
Most of these practices are standard and, if followed properly, make troubleshooting and monitoring of applications in a production environment much easier.
Domain-driven design is a software design approach. How do you start to design any software? A complex problem can be overwhelming, and even looking at an existing code base to figure out the design can be a lot of work. As you build, a distributed system can get complex. This post is part of Distributed System Design.
The domain-driven approach to software development works in sync with domain experts. Usually, one would discuss the problem with domain experts to figure out what domains and rules can be created and how they change in the application. Domain-driven design maps naturally onto object-oriented design: domains are objects. Irrespective of what language you choose, you need to create domain objects.
Discussing with domain experts
A complex problem needs a discussion with domain experts. Once you have collected all the information about rules and conditions, you can start representing the real-life problem in domain objects. The rules and conditions help determine how to represent each domain and how it interacts with other domain objects.
The first step is to choose domain model names. Again, this depends on your domain.
If we take the example of a local library, we will have domain objects like Book, Author, User, and Address. A user of the library borrows a book by a particular author from the library. Of course, you can also add Genre. Ideally, you would talk to a local library and see what they need from an online system to track their book inventory. This discussion will give you an idea about what your end users want and how they want to use the system.
Basic Building Blocks
Once we have enough information about the domain, we create the basic building blocks. The reason domain-driven design starts with these basic building blocks is that this part of the design is the least likely to change. Over the course of the application's lifespan, it should remain stable, which is why it is important to build it as accurately as possible.
Entities
Entities are domain objects that we identify uniquely. A general way to identify these objects is to create an id field, which can be of type UUID.
As discussed in our example of building an online library system to manage books, Book, Author, and Genre will be different entities, and we will add an id field to each of these entities to identify them uniquely.
public class Book
{
private UUID id;
private String title;
private String isbn;
private Date created;
private Date updated;
}
Value Objects
Value objects are attributes or properties of entities. In the Book entity created above, title and isbn are value objects of this entity.
Repositories
Repositories are an intermediate layer between the services that need domain object data and the persistence technology, such as a database.
Aggregates
Aggregates are a collection of entities bound together by a root entity. Entities within an aggregate have a local identity, but outside that boundary they have no identity.
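As a hypothetical illustration for the library example, a Loan aggregate could use Loan as the root entity and loan items as entities with only local identity; the five-book limit below is made up to show an invariant enforced by the root:

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.UUID;

// Aggregate root: everything outside the aggregate refers to the Loan only by its id
public class Loan
{
    private final UUID id;               // global identity of the aggregate root
    private final UUID userId;           // reference to another aggregate by id
    private final List<LoanItem> items = new ArrayList<>();

    public Loan(UUID id, UUID userId)
    {
        this.id = id;
        this.userId = userId;
    }

    // The root enforces the aggregate's invariants, e.g. a maximum of 5 borrowed books
    public void addBook(UUID bookId, LocalDate dueDate)
    {
        if (items.size() >= 5)
        {
            throw new IllegalStateException("A user cannot borrow more than 5 books at once");
        }
        items.add(new LoanItem(items.size() + 1, bookId, dueDate));
    }

    public List<LoanItem> getItems()
    {
        return Collections.unmodifiableList(items);
    }

    // Entity with only local identity: the line number is unique within the Loan, not globally
    public static class LoanItem
    {
        private final int lineNumber;
        private final UUID bookId;
        private final LocalDate dueDate;

        LoanItem(int lineNumber, UUID bookId, LocalDate dueDate)
        {
            this.lineNumber = lineNumber;
            this.bookId = bookId;
            this.dueDate = dueDate;
        }
    }
}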
Services
Services drive the domain-driven design. They are the doers of the entire system. All your business logic relies on services. When you get a request to fetch or insert data, services carry out the validation of rules and data with the help of entities, repositories, and aggregates.
Factories
How do you create aggregates? Usually, factories help in creating aggregates. If an aggregate is simple enough, one can use the aggregate's constructor to create it. Repositories, in turn, handle retrieval, search, and deletion of aggregates.
Understanding Bounded Context in Domain-Driven Design
The challenging part of any design is making sure that our efforts are not duplicated. If you create feature A with certain domains and another feature B that includes part of the domains from feature A, then we are duplicating effort. That's why it's important to understand the bounded context when designing your application architecture. In such cases, one can apply the Liskov Substitution Principle.
The more features you add, the more complex the design gets. With a growing number of models, it is even harder to guess what you will need in the future. In such scenarios, you build bounded contexts. Each bounded context contains its own models but can also share common models with other contexts. This helps divide complex, large models into smaller ones that share only what they need.
Conclusion
In this post, we learned how to use domain-driven design to build a scalable system. If you want to learn more about this topic, you can get this course on domain-driven design.