Question:- Mention how many valves Tomcat is configured with.
Answer:- Tomcat is configured with four types of valves:
• Access Log
• Remote Address Filter
• Remote Host Filter
• Request Dumper
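Valves are enabled in Tomcat's server.xml. As a hedged illustration (attribute values here are examples, not required settings), an access log valve and a remote address filter might look like this:

```xml
<!-- server.xml sketch: log every request in the common log format -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log" suffix=".txt"
       pattern="common" />

<!-- server.xml sketch: only allow requests from localhost -->
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow="127\.\d+\.\d+\.\d+|::1" />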
Question:- Explain the servlet life cycle.
Answer:- The life cycle of a typical servlet running on Tomcat:
• Tomcat receives a request from a client through one of its connectors
• Tomcat maps the request to the appropriate servlet for processing
• Once the request has been directed to the appropriate servlet, Tomcat verifies that the servlet class has been loaded. If it has not, Tomcat loads the servlet class (Java bytecode executable by the JVM) and creates an instance of the servlet
• Tomcat initializes the servlet by calling its init method. The servlet contains code that can read Tomcat's configuration and act accordingly, as well as declare any resources it might require
• Once the servlet has been initialized, Tomcat calls the servlet's service method to process the request
• Tomcat and the servlet can communicate through listener classes during the servlet's life cycle, which track the servlet for a variety of state changes
• To remove the servlet, Tomcat calls the servlet's destroy method
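The lifecycle above can be sketched as a small simulation. This is plain Python that only mimics the order in which a container drives init, service, and destroy; the class and method names echo the Servlet API but are illustrative, not real Tomcat code.

```python
class HelloServlet:
    """Toy servlet recording which lifecycle methods were called."""
    def __init__(self):
        self.calls = []

    def init(self, config):
        # Called once by the container after the instance is created.
        self.calls.append("init")

    def service(self, request):
        # Called by the container for every request mapped to this servlet.
        self.calls.append("service")
        return f"Hello, {request}"

    def destroy(self):
        # Called once before the container unloads the servlet.
        self.calls.append("destroy")


class Container:
    """Toy container that drives the lifecycle in Tomcat's order."""
    def __init__(self, servlet_cls):
        self.servlet_cls = servlet_cls
        self.instance = None

    def handle(self, request):
        if self.instance is None:          # lazy class load + init
            self.instance = self.servlet_cls()
            self.instance.init(config={})
        return self.instance.service(request)

    def shutdown(self):
        if self.instance is not None:
            self.instance.destroy()


container = Container(HelloServlet)
print(container.handle("world"))   # first request: init runs, then service
print(container.handle("again"))   # later requests: service only
container.shutdown()               # destroy on removal
```

Note that init runs only once, on the first request, while service runs per request; that ordering is the essence of the lifecycle described above.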
Question:- Explain the purpose of the NAT protocol.
Answer:- The purpose of NAT (Network Address Translation) is to hide private IP addresses behind a public IP address, which gives a certain level of security to the organization.
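A minimal sketch of the idea: outbound traffic from private addresses is rewritten to a shared public IP with a per-flow port, and the mapping is remembered so replies can be translated back. All addresses and ports below are made up for illustration.

```python
class Nat:
    """Toy NAT device keeping a translation table for outbound flows."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}    # (private_ip, private_port) -> public_port
        self.reverse = {}  # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        # The outside world only ever sees the public address.
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        # A reply to the public port is routed back to the private host.
        return self.reverse[public_port]


nat = Nat("203.0.113.5")
print(nat.outbound("192.168.1.10", 5555))  # ('203.0.113.5', 40000)
print(nat.inbound(40000))                  # ('192.168.1.10', 5555)
```

The security benefit mentioned above follows from the table: an external host cannot reach `192.168.1.10` directly, only the public address, and only for flows the private host initiated.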
Question:- Explain what MAC stands for.
Answer:- MAC stands for Media Access Control (also written Medium Access Control).
Question:- Explain what Tomcat Coyote is.
Answer:- Tomcat Coyote is an HTTP connector based on the HTTP/1.1 specification. It listens on a TCP/IP port, receives web requests and transports them to the Tomcat engine, and sends responses back to the requesting client.
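The Coyote connector is declared in server.xml. A hedged sketch (the port and timeout values are illustrative defaults, not requirements):

```xml
<!-- server.xml sketch: Coyote HTTP/1.1 connector listening on port 8080 -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
```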
Question:- Mention what is Apache Kafka?
Answer:- Apache Kafka is a publish-subscribe messaging system developed by Apache written in Scala. It is a distributed, partitioned and replicated log service.
Question:- Mention what the traditional methods of message transfer are.
Answer:- The traditional method of message transfer includes two models:
• Queuing: a pool of consumers reads messages from the server, and each message goes to one of them
• Publish-Subscribe: messages are broadcast to all consumers
Kafka offers a single consumer abstraction that generalizes both of the above: the consumer group.
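The contrast between the two models can be shown in a few lines. This is a toy simulation, not Kafka client code; the function names are made up for illustration.

```python
import itertools

def queue_delivery(messages, consumers):
    """Queuing model: each message goes to exactly one consumer
    (round-robin over the pool here, for simplicity)."""
    out = {c: [] for c in consumers}
    for msg, consumer in zip(messages, itertools.cycle(consumers)):
        out[consumer].append(msg)
    return out

def pubsub_delivery(messages, consumers):
    """Publish-subscribe model: every consumer receives every message."""
    return {c: list(messages) for c in consumers}


msgs = ["m1", "m2", "m3", "m4"]
print(queue_delivery(msgs, ["A", "B"]))   # {'A': ['m1', 'm3'], 'B': ['m2', 'm4']}
print(pubsub_delivery(msgs, ["A", "B"]))  # both A and B get all four messages
```

Kafka's consumer group generalizes both: consumers in the same group split the partitions (queuing behavior), while separate groups each get the full stream (publish-subscribe behavior).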
Question:- Mention what the benefits of Apache Kafka are over the traditional technique.
Answer:- Apache Kafka has the following benefits over traditional messaging techniques:
• Fast: a single Kafka broker can serve thousands of clients, handling megabytes of reads and writes per second
• Scalable: data is partitioned and spread over a cluster of machines to handle larger volumes
• Durable: messages are persisted and replicated within the cluster to prevent data loss
• Distributed by design: it provides fault-tolerance and durability guarantees
Question:- Mention what the meaning of broker is in Kafka.
Answer:- In a Kafka cluster, the term broker refers to a server.
Question:- Mention what the maximum size of a message is that the Kafka server can receive.
Answer:- The maximum size of a message that the Kafka server can receive is 1,000,000 bytes (about 1 MB) by default.
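This limit is a broker setting. As a hedged sketch (the property name below is the broker-side setting in the Java distribution; exact names can vary by Kafka version):

```properties
# server.properties sketch: raise the per-message limit on the broker.
# Consumers' fetch sizes must be at least this large, or oversized
# messages cannot be fetched.
message.max.bytes=1000000
```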
Question:- Explain what Zookeeper is in Kafka. Can we use Kafka without Zookeeper?
Answer:- Zookeeper is an open-source, high-performance coordination service for distributed applications, adopted by Kafka. No, it is not possible to bypass Zookeeper and connect directly to the Kafka broker. If Zookeeper is down, Kafka cannot serve client requests.
• Zookeeper is basically used to communicate between different nodes in a cluster
• In Kafka, it is used to commit offsets, so if a node fails it can resume from the previously committed offset
• Apart from this, it also performs other activities such as leader detection, distributed synchronization, configuration management, detecting when a node joins or leaves the cluster, tracking node status in real time, etc.
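The offset-commit point above is worth a small illustration. This is a plain-Python stand-in for the coordination service, not Zookeeper or Kafka client code; it only shows why a failed consumer's replacement can resume instead of reprocessing the whole log.

```python
class OffsetStore:
    """Toy stand-in for the service holding committed consumer offsets."""
    def __init__(self):
        self.committed = {}

    def commit(self, group, partition, offset):
        self.committed[(group, partition)] = offset

    def fetch(self, group, partition):
        # A brand-new consumer with no committed offset starts at 0.
        return self.committed.get((group, partition), 0)


log = ["e0", "e1", "e2", "e3", "e4"]
store = OffsetStore()

# First consumer processes three events, commits its position, then "fails".
for offset in range(3):
    _ = log[offset]
store.commit("grp", 0, 3)

# A replacement consumer resumes from the committed offset,
# not from the beginning of the log.
resume_at = store.fetch("grp", 0)
print(log[resume_at:])  # ['e3', 'e4']
```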
Question:- Explain how messages are consumed by a consumer in Kafka.
Answer:- Transfer of messages to consumers in Kafka is done using the sendfile API. It enables the transfer of bytes from disk to the socket entirely within kernel space, saving the copies and system calls that would otherwise shuttle data between the kernel and user space.
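The zero-copy idea can be demonstrated with Python's `os.sendfile` wrapper over the same system call (POSIX-only; this demo uses a local socket pair rather than a real network connection, and is a sketch of the mechanism, not Kafka's actual code path):

```python
import os
import socket
import tempfile

payload = b"kafka log segment bytes"

# Write a small "log segment" to a temporary file on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

left, right = socket.socketpair()
with open(path, "rb") as src:
    # Kernel-space transfer: file bytes go straight to the socket,
    # with no intermediate user-space read buffer.
    sent = os.sendfile(left.fileno(), src.fileno(), 0, len(payload))

left.close()
received = right.recv(1024)
right.close()
os.unlink(path)
print(sent, received)
```

The alternative, a `read()` into a user-space buffer followed by a `send()`, would copy the same bytes twice more and cross the kernel boundary twice more per chunk; avoiding that is what makes Kafka's consumer fetch path fast.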
Question:- Explain how you can improve the throughput of a remote consumer.
Answer:- If the consumer is located in a different data center from the broker, you may need to tune the socket buffer size to amortize the long network latency.
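As a hedged sketch, the relevant knob on the Java consumer is its receive buffer size (the exact property name varies by client version; a rough sizing rule is the bandwidth-delay product of the link):

```properties
# consumer.properties sketch: enlarge the TCP receive buffer for a
# high-latency cross-datacenter link (value here is illustrative).
receive.buffer.bytes=1048576
```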
Question:- Explain how you can get exactly-once messaging from Kafka during data production.
Answer:- To get exactly-once messaging from Kafka you have to address two things: avoiding duplicates during data production and avoiding duplicates during data consumption. Here are two ways to get exactly-once semantics during data production:
• Use a single writer per partition, and every time you get a network error, check the last message in that partition to see whether your last write succeeded
• Include a primary key (a UUID or something similar) in the message and de-duplicate on the consumer
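The second approach can be sketched in a few lines. This is a toy simulation of producer-side keying and consumer-side de-duplication, not Kafka client code; the function names are made up for illustration.

```python
import uuid

def produce(payload):
    """Stamp each message with a unique id so a retried send
    is detectable downstream."""
    return {"id": str(uuid.uuid4()), "payload": payload}

def consume(messages):
    """Drop any message whose id has already been processed,
    so a duplicate from a producer retry is applied at most once."""
    seen = set()
    results = []
    for msg in messages:
        if msg["id"] in seen:
            continue  # duplicate caused by a retried write
        seen.add(msg["id"])
        results.append(msg["payload"])
    return results


m1 = produce("order-1")
m2 = produce("order-2")
# Suppose a network error made the producer resend m1.
print(consume([m1, m1, m2]))  # ['order-1', 'order-2']
```

The trade-off between the two bullets above: the single-writer check avoids ever writing a duplicate, while the key-and-dedupe approach tolerates duplicate writes in the log and removes them at read time.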
