Author Archives: yogesh.mali@gmail.com

Connect Spring Boot Application with AWS Dynamo DB

In this post, I show how we can connect a Spring Boot application with AWS Dynamo DB. I will also cover some fundamentals of AWS Dynamo DB, which is a NoSQL database.

AWS Dynamo DB

As per Amazon documentation, Dynamo DB is a NoSQL key-value and document database. We do have some alternatives like Cassandra (key-value) or MongoDB (document).

Dynamo DB offers

  • reliable, scalable performance
  • a simple API for allowing key-value access

Dynamo DB is usually a great fit for applications with the following requirements:

  1. Large amounts of data with low-latency access requirements
  2. Data sets for recommendation systems
  3. Serverless applications with AWS Lambda

Key Concepts

Before we can use Dynamo DB, it is important to understand some key concepts about this database.

  • Tables, Items, and Attributes – These three are the fundamental building blocks of Dynamo DB. A table is a grouping of data records. An item is a single data record in a table; each item in a table is uniquely identified by its primary key. Attributes are the pieces of data in a single item.
  • Dynamo DB tables are schemaless; we only need to define a primary key when creating the table. A primary key can be either simple (a partition key alone) or composite (a partition key combined with a sort key).
  • Secondary Indexes – Sometimes primary keys are not enough to access data from the table efficiently. Secondary indexes enable additional access patterns. There are two types of indexes – local secondary indexes and global secondary indexes. A local secondary index uses the same partition key as the underlying table, but a different sort key. A global secondary index uses a different partition key and sort key from the underlying table.
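
Conceptually, a composite primary key means an item is addressed by the pair (partition key, sort key). Here is a minimal plain-Java sketch of that idea (the `Order` data and key names are hypothetical, not part of the demo application):

```java
import java.util.HashMap;
import java.util.Map;

public class CompositeKeyDemo {
    public static void main(String[] args) {
        // Model a table with a composite primary key:
        // partition key = customerId, sort key = orderDate.
        Map<String, String> table = new HashMap<>();

        // The full key is the pair (partition key, sort key).
        table.put(key("customer-1", "2021-01-05"), "order A");
        table.put(key("customer-1", "2021-02-10"), "order B");

        // Same partition key + same sort key identifies the same item,
        // so writing again overwrites it instead of adding a new item.
        table.put(key("customer-1", "2021-01-05"), "order A v2");

        System.out.println(table.size());
        System.out.println(table.get(key("customer-1", "2021-01-05")));
    }

    static String key(String partitionKey, String sortKey) {
        return partitionKey + "|" + sortKey;
    }
}
```

Running this prints `2` and `order A v2`: two distinct items exist, and the rewrite replaced the item with the same full key.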

Applications with Dynamo DB

There is one difference between Dynamo DB and other SQL or NoSQL databases: we interact with Dynamo DB through REST calls. We do not need JDBC connection protocols where applications maintain persistent connections.

There are two ways we can connect applications to Dynamo DB.

  1. Use Spring Data Library with Dynamo DB
  2. Use an AWS SDK provided client

Spring Boot Application

As part of this demo, we will create some data model classes that depict entity relationships. The application will provide a simple REST API for CRUD operations and will store the data in Dynamo DB.

So let’s start with adding the required dependencies in our application:


dependencies {
	implementation 'org.springframework.boot:spring-boot-starter-web'
	implementation 'io.github.boostchicken:spring-data-dynamodb:5.2.5'
	implementation 'junit:junit:4.13.1'
	testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

So the dependency spring-data-dynamodb allows us to represent Dynamo DB tables as model classes and create repositories for those tables.

We will create our model class Company as follows:


package com.betterjavacode.dynamodbdemo.models;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAutoGeneratedKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName = "Company")
public class Company
{
    private String companyId;

    private String name;

    private String type;

    @DynamoDBHashKey(attributeName = "CompanyId")
    @DynamoDBAutoGeneratedKey
    public String getCompanyId ()
    {
        return companyId;
    }


    public void setCompanyId (String companyId)
    {
        this.companyId = companyId;
    }

    @DynamoDBAttribute(attributeName = "Name")
    public String getName ()
    {
        return name;
    }

    public void setName (String name)
    {
        this.name = name;
    }

    @DynamoDBAttribute(attributeName = "Type")
    public String getType ()
    {
        return type;
    }

    public void setType (String type)
    {
        this.type = type;
    }
}

So this class Company maps to the Dynamo DB table of the same name. The annotation DynamoDBTable establishes this mapping. Similarly, DynamoDBHashKey marks the partition key of this table, and DynamoDBAttribute marks the other attributes of this table.

We will create a REST Controller and a Service class that will allow us to call the CRUD APIs for this object.


package com.betterjavacode.dynamodbdemo.controllers;

import com.betterjavacode.dynamodbdemo.models.Company;
import com.betterjavacode.dynamodbdemo.services.CompanyService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("v1/betterjavacode/companies")
public class CompanyController
{
    @Autowired
    private CompanyService companyService;

    @GetMapping(value = "/{id}", produces = "application/json")
    public ResponseEntity<Company> getCompany(@PathVariable("id") String id)
    {

        Company company = companyService.getCompany(id);

        if(company == null)
        {
            // Return 404 when the company does not exist
            return new ResponseEntity<>(HttpStatus.NOT_FOUND);
        }
        else
        {
            return new ResponseEntity<>(company, HttpStatus.OK);
        }
    }

    @PostMapping()
    public Company createCompany(@RequestBody Company company)
    {
        Company companyCreated = companyService.createCompany(company);

        return companyCreated;
    }
}



So we have two methods: one to get the company data and another to create the company.


package com.betterjavacode.dynamodbdemo.services;

import com.betterjavacode.dynamodbdemo.models.Company;
import com.betterjavacode.dynamodbdemo.repositories.CompanyRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Optional;

@Service
public class CompanyService
{
    @Autowired
    private CompanyRepository companyRepository;

    public Company createCompany(final Company company)
    {
        Company createdCompany = companyRepository.save(company);
        return createdCompany;
    }

    public List<Company> getAllCompanies()
    {
        return (List<Company>) companyRepository.findAll();
    }

    public Company getCompany(String companyId)
    {
        Optional<Company> companyOptional = companyRepository.findById(companyId);

        if(companyOptional.isPresent())
        {
            return companyOptional.get();
        }
        else
        {
            return null;
        }
    }
}
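
The CompanyService above depends on a CompanyRepository that is not shown in the post. With spring-data-dynamodb it would typically be declared as `public interface CompanyRepository extends CrudRepository<Company, String> { }` (my assumption for this demo). To make the three calls the service uses concrete without a Spring context, here is a plain-Java in-memory stand-in:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;

// In-memory stand-in for CrudRepository<Company, String>:
// save(), findById(), and findAll() are the calls the service layer makes.
public class InMemoryCompanyRepository {
    static class Company {
        String companyId;
        String name;
    }

    private final Map<String, Company> items = new HashMap<>();

    public Company save(Company company) {
        if (company.companyId == null) {
            // Mirrors @DynamoDBAutoGeneratedKey: generate the key on save.
            company.companyId = UUID.randomUUID().toString();
        }
        items.put(company.companyId, company);
        return company;
    }

    public Optional<Company> findById(String id) {
        return Optional.ofNullable(items.get(id));
    }

    public List<Company> findAll() {
        return new ArrayList<>(items.values());
    }

    public static void main(String[] args) {
        InMemoryCompanyRepository repo = new InMemoryCompanyRepository();
        Company c = new Company();
        c.name = "BetterJavaCode";
        Company saved = repo.save(c);
        System.out.println(saved.companyId != null);
        System.out.println(repo.findById(saved.companyId).get().name);
        System.out.println(repo.findAll().size());
    }
}
```

This is only a sketch of the repository contract; in the real application, the spring-data-dynamodb library generates the implementation against the Dynamo DB table.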

Connect Spring Boot Application with AWS Dynamo DB

So far, we have seen how to create parts of the application. But we still have an important part left, and that is connecting our application to the Dynamo DB service in AWS.

Login to AWS Console and access Dynamo DB.

Create a new table in Dynamo DB.

Spring Boot Connect AWS Dynamo DB - Table

Assuming you choose the primary key as CompanyId, we should be fine here. Remember, that’s the partition key we defined in our model class.

Now back to the Spring Boot Application. Create a new bean ApplicationConfig to define Dynamo DB configuration.



package com.betterjavacode.dynamodbdemo.config;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import org.socialsignin.spring.data.dynamodb.repository.config.EnableDynamoDBRepositories;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableDynamoDBRepositories(basePackages = "com.betterjavacode.dynamodbdemo.repositories")
public class ApplicationConfig
{
    @Value("${amazon.aws.accesskey}")
    private String amazonAccessKey;

    @Value("${amazon.aws.secretkey}")
    private String amazonSecretKey;

    public AWSCredentialsProvider awsCredentialsProvider()
    {
        return new AWSStaticCredentialsProvider(amazonAWSCredentials());
    }

    @Bean
    public AWSCredentials amazonAWSCredentials()
    {
        return new BasicAWSCredentials(amazonAccessKey, amazonSecretKey);
    }

    @Bean
    public AmazonDynamoDB amazonDynamoDB()
    {
        return AmazonDynamoDBClientBuilder.standard().withCredentials(awsCredentialsProvider()).withRegion(Regions.US_EAST_1).build();
    }
}

We will need to pass accessKey and secretKey in application.properties. Importantly, we are creating an AmazonDynamoDB bean here.
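
The two property names read by @Value above would go into application.properties. The values below are placeholders; real credentials should come from a safer source (such as environment variables) and should never be committed to version control:

```properties
amazon.aws.accesskey=YOUR_ACCESS_KEY
amazon.aws.secretkey=YOUR_SECRET_KEY
```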

Now, let’s start our application and we will see the log that shows it has created a connection with DynamoDB table Company.

Spring Boot Application Log - Dynamo DB Table

Once the application has started, we will use Postman to access the REST API.
Spring Boot POST API - Dynamo DB

Conclusion

Code for this demo is available in my GitHub repository.

In this post, we showed how we can use Dynamo DB, a NoSQL database, in a Spring Boot application.

  • We went over Dynamo DB concepts.
  • And we created a Spring Boot application.
  • We created a Dynamo DB table in AWS.
  • We connected the Spring Boot application to the AWS Dynamo DB table.

Introduction to Serverless Architecture Patterns

In this post, I will cover Serverless Architecture Patterns. With multiple cloud providers available, on-premise infrastructure is becoming a thing of the past. By simple definition, serverless can be the absence of a server. But is that true? Not really. To start, we will go over the basics of serverless architecture, and then its benefits and drawbacks.

What is Serverless Architecture?

Lately, serverless architecture is becoming more of a trend. The developer still writes the server-side code, but it runs in stateless compute containers instead of traditional server architecture. An event triggers this code and a third party (like AWS Lambda) manages it. Basically, this is Function as a Service (FaaS). AWS Lambda is the most popular form of FaaS.

So the definition of Serverless Architecture –

Serverless Architecture is a design pattern where applications are hosted by a third-party service, eliminating the need for server software and hardware.

In traditional architecture, a user does activity on the UI side and that sends a request to the server where the server-side code does some database transaction. With this architecture, the client has no idea what is happening as most of the logic is on the server-side.

With Serverless Architecture, we will have multiple functions (lambdas) for individual services and the client UI will call them through API-Gateway.

Serverless Architecture Patterns - Initial Design

So in the above architecture

  1. When the client UI is accessed, it authenticates the user through an authentication function that interacts with the user database.
  2. Once the user logs in, they can purchase or search for products using the purchase and search functions.

In traditional server-based architecture, there was a central piece that was managing flow, control, and security. In Serverless architecture, there is no central piece. The drawback of serverless architecture is that we then rely on the underlying platform for security.

Why Serverless Architecture?

With a traditional architecture, you used to own a server, and then you would configure the web server and the application. Then came the cloud revolution, and now everyone wants to be on the cloud. Even with multiple cloud providers, we still need to manage the operating system on the server and the web server.

What if there were a way you could focus solely on the code while a service manages the server and the web server? AWS provides Lambda and Microsoft Azure provides Functions to take care of physical hardware, virtual operating systems, and the web server. This is how serverless architecture reduces complexity by letting developers focus on code only.

Function As A Service (FAAS)

We have covered some ground with Serverless Architecture. But this pattern is possible only with Function as a Service. Lambda is one type of Function as a service. Basically, FaaS is about running backend code without managing any server or server applications.

One key advantage of FaaS like AWS Lambda is that you can use any programming language (like Java, Javascript, Python, Ruby) to code and the Lambda infrastructure will take care of setting up an environment for that language.

Another advantage of FaaS is horizontal scaling: it is mostly automatic and elastic, and the cloud provider handles it.

Tools

There are a number of tools available for building serverless functions. The Serverless Framework, in particular, makes building them an easy process.

Benefits

  1. The key benefit of using serverless architecture is reduced operational cost. Once a cloud provider takes care of the infrastructure, you no longer have to manage it yourself.
  2. Faster deployment, greater flexibility – Speed helps with innovation. With faster deployment, serverless makes it easier to change functions and test the changes.
  3. Reduced scaling cost and time – With infrastructure providers handling most of the horizontal scaling, the application owner doesn’t have to worry about scaling.
  4. Focus on UX – With the architecture becoming simpler, developers can focus more on the user experience (UX).

Drawbacks

As with any architectural design, there are tradeoffs.

  1. Vendor Control – By using an infrastructure vendor, we are giving away the control for backend service. We have to rely on their security infrastructure instead of designing our own.
  2. Running workloads could be more costly for serverless.
  3. Repetition of logic – Database migration means repetition of code and coordination for the new database.
  4. Startup latency issues – When the initial request comes in, the platform must start the required resources before it can serve the request, which causes an initial latency (a cold start).

Conclusion

In this post, I discussed Serverless Architecture Patterns and what makes it possible to use this pattern. I also discussed the benefits and drawbacks. If you have more questions about Serverless Architecture, please post your comment and I will be happy to answer. You can subscribe to my blog here.

References

  1. Serverless Architecture – Serverless

How To Use Spring Security With SAML Protocol Binding

In this post, I will show how we can use Spring Security with SAML protocol binding to integrate with the Keycloak identity provider. And if you want to read about how to use Keycloak, you can read here.

What is SAML?

SAML stands for Security Assertion Markup Language. It’s an open standard for exchanging authentication and authorization data between a service provider (SP) and an identity provider (IdP).

Identity Provider – performs authentication, validates the user’s identity for authorization, and passes that information to the service provider.

Service Provider – trusts the identity provider and grants the user access to the service based on that authorization.

SAML Authentication Flow

As part of this flow, we will be building a simple To-Do List application. A user will access the application and will be redirected for authentication.

SAML Authentication User Flow:

  1. User accesses Service Provider (SP) ToDo List Application.
  2. Application redirects user to Keycloak login screen. During this redirect, the application sends an AuthnRequest to Keycloak IDP.
  3. Keycloak IDP validates that the request is coming from the right relying party/service provider. It checks the issuer and the redirect URI (ACS URL).
  4. Keycloak IDP sends a SAML response back to Service Provider.
  5. Service Provider validates the signed response with provided IDP public certificate.
  6. If the response is valid, we extract the NameID attribute from the assertion and log the user in.
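
Step 5 of the flow above boils down to public-key signature verification. A minimal, self-contained Java sketch of the idea (using a freshly generated RSA key pair in place of the real IDP certificate, and a placeholder payload instead of a real SAML response):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureCheckDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the IDP key material; in the real flow the
        // public key comes from the IDP certificate (idp.cer).
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] samlResponse = "<samlp:Response>...</samlp:Response>"
                .getBytes(StandardCharsets.UTF_8);

        // IDP side: sign the response with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(samlResponse);
        byte[] signature = signer.sign();

        // SP side: verify the signature with the IDP public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(samlResponse);
        System.out.println(verifier.verify(signature));
    }
}
```

In the real application, Spring Security performs this verification against the certificate we register below as a Saml2X509Credential.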


How to use Spring Security with SAML Protocol

Note: The Spring Security SAML extension was a library that used to provide SAML support. But after 2018, the Spring Security team moved that project, and SAML 2 authentication is now supported as part of core Spring Security.

Use Spring Security with SAML Protocol Binding

Once you create a Spring Boot project, we will need to add the following dependencies.


dependencies {
	implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
	implementation 'org.springframework.boot:spring-boot-starter-jdbc'
	implementation 'org.springframework.boot:spring-boot-starter-thymeleaf'
	implementation 'org.springframework.boot:spring-boot-starter-web'
	/*
	 * Spring Security
	 */
	implementation 'org.springframework.boot:spring-boot-starter-security'
	runtimeOnly 'mysql:mysql-connector-java'
	providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat'
	implementation 'org.springframework.security:spring-security-saml2-service-provider:5.3.5.RELEASE'

	/*
	 * Keycloak
	 */
	implementation 'org.keycloak:keycloak-spring-boot-starter:11.0.3'
	testImplementation('org.springframework.boot:spring-boot-starter-test') {
		exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
	}
}

The dependency spring-security-saml2-service-provider allows us to add a relying party registration. It also helps with identity provider registration.

Now, we will add this registration in our SecurityConfig as below:


    @Bean
    public RelyingPartyRegistrationRepository relyingPartyRegistrationRepository() throws CertificateException
    {
        final String idpEntityId = "http://localhost:8180/auth/realms/ToDoListSAMLApp";
        final String webSSOEndpoint = "http://localhost:8180/auth/realms/ToDoListSAMLApp/protocol/saml";
        final String registrationId = "keycloak";
        final String localEntityIdTemplate = "{baseUrl}/saml2/service-provider-metadata" +
                "/{registrationId}";
        final String acsUrlTemplate = "{baseUrl}/login/saml2/sso/{registrationId}";


        Saml2X509Credential idpVerificationCertificate;
        try (InputStream pub = new ClassPathResource("credentials/idp.cer").getInputStream())
        {
            X509Certificate c = (X509Certificate) CertificateFactory.getInstance("X.509").generateCertificate(pub);
            idpVerificationCertificate = new Saml2X509Credential(c, VERIFICATION);
        }
        catch (Exception e)
        {
            throw new RuntimeException(e);
        }

        RelyingPartyRegistration relyingPartyRegistration = RelyingPartyRegistration
                .withRegistrationId(registrationId)
                .providerDetails(config -> config.entityId(idpEntityId))
                .providerDetails(config -> config.webSsoUrl(webSSOEndpoint))
                .providerDetails(config -> config.signAuthNRequest(false))
                .credentials(c -> c.add(idpVerificationCertificate))
                .assertionConsumerServiceUrlTemplate(acsUrlTemplate)
                .build();

        return new InMemoryRelyingPartyRegistrationRepository(relyingPartyRegistration);
    }

Our login will also change with HttpSecurity as follows:

httpSecurity.authorizeRequests()
        .antMatchers("/js/**", "/css/**", "/img/**").permitAll()
        .antMatchers("/signup", "/forgotpassword").permitAll()
        .antMatchers("/saml/**").permitAll()
        .anyRequest().authenticated()
        .and()
        .formLogin()
        .loginPage("/login").permitAll()
        .and()
        .saml2Login(Customizer.withDefaults())
        .exceptionHandling(exception ->
                exception.authenticationEntryPoint(entryPoint()))
        .logout()
        .logoutUrl("/logout")
        .logoutSuccessHandler(logoutSuccessHandler)
        .deleteCookies("JSESSIONID")
        .permitAll();

We are now using saml2Login. By default, if you access the application, it will redirect to the identity provider. We want to show our custom login page before the user is redirected to the identity provider – Keycloak. That’s why we have authenticationEntryPoint, which allows us to configure a custom login page. So now if we access our application at https://localhost:8743/login, we will see the login page below:

How To Use Spring Security with SAML Protocol - Login Screen

So once you select the option Login with Keycloak SAML, it will send an AuthnRequest to Keycloak. This request is unsigned. Keycloak will send back a signed response. A controller will receive this signed response and decode the NameID attribute.


@GetMapping(value="/index")
public String getHomePage(Model model, @AuthenticationPrincipal Saml2AuthenticatedPrincipal saml2AuthenticatedPrincipal)
{
   String principal = saml2AuthenticatedPrincipal.getName();
   model.addAttribute("username", principal);
   return "index";
}

Once the NameID is retrieved, it will log the user in.

Configuration on Keycloak

We will have to configure our application in the Keycloak administration console.

  • Create a REALM for your application.
  • Select Endpoints – SAML 2.0 IdP Metadata
  • In Clients – Add the service provider.
  • For your client, configure Root URL, SAML Processing URL (https://localhost:8743/saml2/service-provider-metadata/keycloak)
  • You can also adjust the other settings like – signing assertions, including AuthnStatement.
  • Finally, configure the ACS URL in the “Fine Grain SAML Endpoint Configuration” section.

Spring Security with SAML Protocol - Keycloak

Code Repository

The code for this project is available in my GitHub repository. I also covered this in more detail in my book Simplifying Spring Security. To learn more, you can buy my book here.

Conclusion

In this post, I showed how to use Spring Security with the SAML protocol. Over the years, there have been a lot of improvements in Spring Security, and it can now easily be used with different protocols like OAuth and OIDC. If you enjoyed this post, subscribe to my blog here.

Using Apache Kafka With Spring Boot

In this post, I will show how we can integrate Apache Kafka with a Spring Boot application. I will also show how we can send and consume messages from our application.

What is Apache Kafka?

I previously wrote an introductory post about Kafka. But if you still don’t know anything about Kafka, then this will be a good summary.

Kafka is a stream-processing platform, currently available as open-source software from Apache. Kafka provides low-latency ingestion of large amounts of data.

One key advantage of Kafka is that it allows moving large amounts of data and processing it in real time. Kafka supports horizontal scaling, which means it can be scaled by adding more brokers to the Kafka cluster.

Kafka Terminology

So, Kafka has its own terminology. But it is easy to understand if you are just starting out.

  1. Producer – A producer is a client that produces a message and sends it to the Kafka server on a specified topic.
  2. Consumer – A consumer is a client that consumes messages from a Kafka topic.
  3. Cluster – Kafka is a distributed system of brokers. Multiple brokers make up a cluster.
  4. Broker – Kafka brokers form a cluster by sharing information with each other directly or indirectly through ZooKeeper. A broker receives a message from the producer, and the consumer fetches this message from the broker by topic, partition, and offset.
  5. Topic – A topic is a category name to which the producer sends messages and from which the consumer consumes them.
  6. Partition – Messages for a topic are spread across several partitions within the Kafka cluster.
  7. Offset – An offset is a pointer to the last message that Kafka has sent to a consumer.
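
The partition and offset terminology above can be sketched in plain Java: each partition is an append-only list, and a consumer keeps track of how far it has read per partition. This is a conceptual model only, not the Kafka client API:

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetDemo {
    public static void main(String[] args) {
        // A topic with two partitions; each partition is an append-only log.
        List<List<String>> topic = new ArrayList<>();
        topic.add(new ArrayList<>());
        topic.add(new ArrayList<>());

        // A producer appends messages; a message's offset is its index.
        topic.get(0).add("m0"); // partition 0, offset 0
        topic.get(0).add("m1"); // partition 0, offset 1
        topic.get(1).add("m2"); // partition 1, offset 0

        // A consumer tracks one offset per partition.
        int[] consumerOffsets = {0, 0};

        // Consume everything currently in partition 0.
        while (consumerOffsets[0] < topic.get(0).size()) {
            System.out.println(topic.get(0).get(consumerOffsets[0]));
            consumerOffsets[0]++; // the committed offset moves forward
        }
        System.out.println("next offset for partition 0: " + consumerOffsets[0]);
    }
}
```

Because the offset is stored per consumer group, a restarted consumer can resume from where it left off instead of re-reading the whole partition.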

Setting Up Spring Boot Application

As part of this post, I will show how we can use Apache Kafka with a Spring Boot application.

We will run a Kafka Server on the machine and our application will send a message through the producer to a topic. Part of the application will consume this message through the consumer.

To start with, we will need a Kafka dependency in our project.


implementation 'org.springframework.kafka:spring-kafka:2.5.2'

We will have a REST controller which will basically take a JSON message and send it to Kafka topic using Kafka Producer.


package com.betterjavacode.kafkademo.resource;

import com.betterjavacode.kafkademo.model.Company;
import com.betterjavacode.kafkademo.producers.KafkaProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping(value = "/v1/kafka")
public class KafkaRestController
{
    private final KafkaProducer kafkaProducer;

    @Autowired
    public KafkaRestController(KafkaProducer kafkaProducer)
    {
        this.kafkaProducer = kafkaProducer;
    }

    @PostMapping(value = "/send", consumes={"application/json"}, produces = {"application/json"})
    public void sendMessageToKafkaTopic(@RequestBody Company company)
    {
        this.kafkaProducer.sendMessage(company);
    }
}

This KafkaProducer uses KafkaTemplate to send the message to a topic. KafkaProducer is a service that we created for this application.


@Service
public class KafkaProducer
{
    private static final Logger logger = LoggerFactory.getLogger(KafkaProducer.class);
    private static final String topic = "companies";

    @Autowired
    private KafkaTemplate<String, Company> kafkaTemplate;

    public void sendMessage(Company company)
    {
        logger.info(String.format("Outgoing Message - Producing -> %s", company));
        this.kafkaTemplate.send(topic, company);
    }
}

Similarly, we will have our consumer.


@Service
public class KafkaConsumer
{
    private final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    @KafkaListener(topics = "companies", groupId = "group_id")
    public void consume(String company)
    {
        logger.info(String.format("Incoming Message - Consuming -> %s", company));
    }
}

Our consumer uses @KafkaListener, which allows us to listen to a topic. When a message arrives on this topic, the listener alerts the consumer, and the consumer picks up that message.

Kafka Configuration

Before showing how we will send and consume these messages, we still need to configure the Kafka cluster in our application. There are also some other properties our producer and consumer will need.

Basically, there are two ways to configure these properties: either programmatically or through YAML configuration. For our application, I have configured this through a YAML configuration file.


server:
  port: 9000
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        spring.json.trusted.packages: "com.betterjavacode.kafkademo.model"
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer

bootstrap-servers – Host and Port on which Kafka Server is running
key-deserializer – class to use to deserialize the key of the message that the consumer consumes
value-deserializer – class to use to deserialize the value of the message that the consumer consumes
key-serializer – Serializer class to serialize the key of the message that the producer produces
value-serializer – Serializer class to serialize the value of the message
group-id – This specifies the consumer group to which the consumer belongs

auto-offset-reset – Each message in a partition of a Kafka topic has a unique identifier, the offset. When there is no committed offset for the consumer group, this setting determines where the consumer starts reading; earliest starts from the beginning of the partition.

Before we can send messages, our topic must exist. I will manually create this topic and show in the next section. In the Spring Boot application, we can create a KafkaAdmin bean and use it to create topics as well.
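
The KafkaAdmin approach mentioned above can be sketched as the following Spring configuration. This is a sketch based on the spring-kafka API and assumes the same demo setup; the class name KafkaTopicConfig is my own:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class KafkaTopicConfig {

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        return new KafkaAdmin(configs);
    }

    // KafkaAdmin creates any NewTopic beans on application startup
    // if they do not already exist.
    @Bean
    public NewTopic companiesTopic() {
        // topic name, number of partitions, replication factor
        return new NewTopic("companies", 1, (short) 1);
    }
}
```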

Running Kafka Server

As part of this demo, I will also show how we can start our first instance of the Kafka server. Download Kafka from here.

Once you download and extract it into a directory, we will have to edit one property in each of zookeeper.properties and server.properties.

Update zookeeper.properties with the data directory (dataDir) where you want ZooKeeper to store data.

Start ZooKeeper – zookeeper-server-start.bat ../../config/zookeeper.properties

Update server.properties with the logs directory (log.dirs).

Start the server – kafka-server-start.bat ../../config/server.properties

Now, to create a topic, we can open another command prompt and run this command from the directory where we installed Kafka:
kafka-topics.bat --create --topic companies --bootstrap-server localhost:9092

Sending and Consuming Messages

So far, we have created a Spring Boot application with Kafka. We have configured our Kafka server. Now comes the moment to send and consume messages to Kafka topic.

We have a REST API to send a message about a company. The Kafka producer will then send this message to the Kafka topic companies. A Kafka consumer will read this message from the topic, as it is listening to it.

Use Apache Kafka With Spring Boot - REST API

Once this message is sent, we can take a look at our application server log, which will show the message being produced through the Kafka producer.

Apache Kafka With Spring Boot - Producer

At the same time, our Consumer is listening. So, it will pick up this message to consume.

Apache Kafka with Spring Boot - Consumer Message


Conclusion

So far, we have shown how we can use Apache Kafka with a Spring Boot application. As the amount of data in applications increases, it becomes important to stream it with low latency. The code repository for this demo application is here.

If you want me to cover anything else with Kafka, feel free to comment on this post and I will cover more details about Kafka in upcoming posts.

What Makes a Good Junior Developer

What makes a good junior developer? I will talk about some qualities every junior developer should develop to do better in this role. Now, Junior Developer is a broad term; it can include Associate Software Engineers, Software Engineers, or Developers.

Once, I was a junior developer too. Now I am in a senior role, but that doesn’t mean I am no longer a junior to other seniors. So, I wish there had been some kind of guide to help junior developers succeed in their roles.

Qualities that will help you succeed as a Junior Developer

  1. Be open-minded to take up a challenge – One quality I really appreciate in the junior developers I have worked with so far is their willingness to take up a challenge. In the initial days, you want to learn as much as you can. Sometimes it can be overwhelming; other times it can be boring. But keep learning and reading.
  2. Take ownership of the task you work on – If you get a task to work on, take ownership of that task until its completion. You can build trust with your peers by taking ownership of the task. If you get stuck, ask your seniors questions about it. Senior developers are there to help you.
  3. Ask questions – As a senior developer, I really appreciate developers who ask questions, even if those questions are easy to answer. If you are stuck or don’t know something, ask. Even in meetings, if you ask a question, everybody should appreciate it. A question brings a different perspective, and every perspective matters.
  4. Help others – One way to build your career at any organization is by helping others as much as you can. So, help others. If you solved a problem that many other developers are facing, share that knowledge. If you built an automation tool, share the tool. When other junior developers come to you, help them. Be so good that everyone comes to you.

How to understand the system as a Junior Developer

Everyone has their own way to learn any skill or a complex system. But there are a few tried and tested ideas that can help you understand the system. This is definitely helpful in your initial days at any organization as a junior developer.

  1. Read and take notes. If there is documentation for the system architecture, read as much as you can. This will help you see the bigger picture of the system. If there is no documentation, you are up against a challenge. Try to identify the components of the system that communicate with each other, and start creating documentation yourself so that future junior developers can thank you.
  2. Take up a task. Once you have the bigger picture, take up a task and work on it. Do a microanalysis of the part of the system you work on. This will give you both a close-up and a long-distance view of your system. Also, remember that understanding any complex system takes time, so don’t be discouraged if you do not understand everything in a month or two.
  3. Read about system design in general. One way to build up your knowledge about any complex system is by reading about system design. Another way is to read your company’s engineering blog. A lot of passionate developers write these engineering blogs to share their knowledge worldwide.

Tips to succeed as a Junior Developer

  1. Read code – Read the code of the system you are working on. This will make you comfortable with the codebase. Based on your reading, prepare questions that you can ask senior developers.
  2. Participate in code reviews – From code reviews, you will learn how senior developers help improve code quality, so you can adopt those standards. You will also learn to review others’ code.
  3. Write unit test cases – For whatever code you write, write unit test cases. This will set you apart from other developers.
  4. Start understanding where your team is struggling – Become a person who can identify a problem and find a solution. It’s rare to find high-agency developers, but you can build up that skill. See where your team is struggling and how you can help. Every team struggles with something. If, as a junior developer, you have time, then help your team by building a tool or creating documentation for a part of the system that has been ignored.
  5. Do not compare – Do not compare yourself with fellow junior or senior developers. Everyone’s path is different. Focus on what you can and cannot do, and improve your skills.
  6. Ask for feedback – If there is no formal way to get feedback from a manager or senior developers, check in with your seniors once a month to ask for it. Feedback helps you improve. Find your own weaknesses and work on them.

Conclusion

In this post, I shared some tips and skills that a Junior Developer can use to be successful. If you enjoyed this post, comment on this post telling me what skill you think can make a Junior Developer stand out.

If you are still looking for my book Simplifying Spring Security, you can buy it here.