DevOps Training Course Online – Learn & Get Certified!

Coding Compiler DevOps Training – Learn DevOps online and master DevOps tools and concepts from industry-expert trainers through real-time projects. Enrol now for the DevOps course and become job-ready. Book your FREE demo now!


Vagrant Interview Questions And Answers 2018

Vagrant Interview Questions And Answers 2018. Here Coding Compiler is sharing a list of 30 Vagrant interview questions for DevOps. These Vagrant questions will help you crack your next Vagrant DevOps job interview.

Vagrant Interview Questions

  1. What is Vagrant?
  2. Vagrant is written in which language?
  3. What is a BOX in Vagrant?
  4. What is Provider in Vagrant?
  5. What is Provisioner in Vagrant?
  6. What are the subcommands associated with the Box command?
  7. Explain Box Add Command in Vagrant?
  8. What is Box List command in Vagrant?
  9. What is Box Outdated command in Vagrant?
  10. What is Box Prune command in Vagrant?

Vagrant Interview Questions And Answers

1) What is Vagrant?

A) Vagrant is an open-source software product for building and maintaining portable virtual software development environments, e.g. for VirtualBox, Hyper-V, Docker, VMware, and AWS.

2) Vagrant is written in which language?

A) Vagrant is written in the Ruby language.

3) What is a BOX in Vagrant?

A) A box is a packaged Vagrant environment, typically a virtual machine.

4) What is Provider in Vagrant?

A) A provider is the location in which the virtual environment runs. It can be local (the default is to use VirtualBox), remote, or even a special case like a Docker container.

5) What is Provisioner in Vagrant?

A) A provisioner is a tool to set up the virtual environment, and can be as simple as a shell script, but alternatively a more advanced tool like Chef, Puppet, or Ansible can be used.

6) What are the subcommands associated with the Box command?

A) The box command is used to manage (add, remove, etc.) boxes.

Command: vagrant box

The main functionality of this command is exposed via even more subcommands:

  • add
  • list
  • outdated
  • prune
  • remove
  • repackage
  • update

7) Explain Box Add Command in Vagrant?

A) Command: vagrant box add ADDRESS

This adds a box with the given address to Vagrant.
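
For example, to add a box by its Vagrant Cloud shorthand name (hashicorp/bionic64 is one publicly available example box):

Command: vagrant box add hashicorp/bionic64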

8) What is Box List command in Vagrant?

A) Command: vagrant box list

This command lists all the boxes that are installed into Vagrant.

9) What is Box Outdated command in Vagrant?

A) Command: vagrant box outdated

This command tells you whether or not the box you are using in your current Vagrant environment is outdated.

10) What is Box Prune command in Vagrant?

A) Command: vagrant box prune

This command removes old versions of installed boxes. If the box is currently in use, Vagrant will ask for confirmation.

Vagrant DevOps Interview Questions

11) What is Box Remove command in Vagrant?

A) Command: vagrant box remove NAME

This command removes a box from Vagrant that matches the given name.

12) What is Box Repackage command in Vagrant?

A) Command: vagrant box repackage NAME PROVIDER VERSION

This command repackages the given box and puts it in the current directory so you can redistribute it. The name, provider, and version of the box can be retrieved using vagrant box list.

13) What is Box Update command in Vagrant?

A) Command: vagrant box update

This command updates the box for the current Vagrant environment if there are updates available.

14) What is Connect command in Vagrant?

A) Command: vagrant connect NAME

The connect command complements the share command by enabling access to shared environments.

15) What is the Destroy command in Vagrant?

A) Command: vagrant destroy [name|id]

This command stops the running machine Vagrant is managing and destroys all resources that were created during the machine creation process.

16) What is the Global Status command in Vagrant?

A) Command: vagrant global-status

This command will tell you the state of all active Vagrant environments on the system for the currently logged-in user.

17) What is Vagrant Share?

A) Vagrant Share allows you to share your Vagrant environment with anyone in the world, enabling collaboration directly in your Vagrant environment in almost any network environment with just a single command: vagrant share.

18) What is Vagrantfile?

A) The primary function of the Vagrantfile is to describe the type of machine required for a project, and how to configure and provision these machines.
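
As an illustration, here is a minimal Vagrantfile sketch; the box name hashicorp/bionic64 and the memory setting are examples, not requirements:

  # Minimal example Vagrantfile (Ruby). Run "vagrant up" in the same folder.
  Vagrant.configure("2") do |config|
    # Which base box to boot; substitute any box you have added.
    config.vm.box = "hashicorp/bionic64"

    # Optional provider-specific tuning (VirtualBox is the default provider).
    config.vm.provider "virtualbox" do |vb|
      vb.memory = 1024
    end
  end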

19) What is Provisioning in Vagrant?

A) Provisioners in Vagrant allow you to automatically install software, alter configurations, and more on the machine as part of the vagrant up process.
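
A minimal sketch of an inline shell provisioner in a Vagrantfile (the box name and the package installed are examples only):

  Vagrant.configure("2") do |config|
    config.vm.box = "hashicorp/bionic64"  # example box

    # Runs inside the guest on the first "vagrant up" (and on "vagrant provision").
    config.vm.provision "shell", inline: <<-SHELL
      apt-get update
      apt-get install -y nginx
    SHELL
  end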

20) What are Synced Folders in Vagrant?

A) Synced folders enable Vagrant to sync a folder on the host machine to the guest machine, allowing you to continue working on your project’s files on your host machine, but use the resources in the guest machine to compile or run your project.
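
A minimal sketch, assuming a ./src directory exists next to the Vagrantfile (by default Vagrant also syncs the project root to /vagrant on the guest):

  Vagrant.configure("2") do |config|
    config.vm.box = "hashicorp/bionic64"  # example box

    # Keep ./src on the host and /var/www on the guest in sync.
    config.vm.synced_folder "./src", "/var/www"
  end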

Vagrant Tool Interview Questions And Answers

21) What is Multi-Machine environment in Vagrant?

A) Vagrant is able to define and control multiple guest machines per Vagrantfile. This is known as a “multi-machine” environment.

These machines are generally able to work together or are somehow associated with each other. Here are some use-cases people are using multi-machine environments for today:

  • Accurately modeling a multi-server production topology, such as separating a web and database server.
  • Modeling distributed systems and how their parts interact with each other.
  • Testing an interface, such as an API to a service component.
  • Disaster-case testing: machines dying, network partitions, slow networks, inconsistent world views, etc.

22) How do you define multiple machines in Vagrant?

A) Multiple machines are defined within the same project Vagrantfile using the config.vm.define method call.
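
A minimal two-machine sketch (machine names, box, and private-network IPs are illustrative); bring up a single machine with, for example, vagrant up web:

  Vagrant.configure("2") do |config|
    # "web" and "db" become the machine names used by vagrant up/ssh/halt.
    config.vm.define "web" do |web|
      web.vm.box = "hashicorp/bionic64"
      web.vm.network "private_network", ip: "192.168.56.10"
    end

    config.vm.define "db" do |db|
      db.vm.box = "hashicorp/bionic64"
      db.vm.network "private_network", ip: "192.168.56.11"
    end
  end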

23) What are Providers in Vagrant?

A) While Vagrant ships out of the box with support for VirtualBox, Hyper-V, and Docker, Vagrant has the ability to manage other types of machines as well. This is done by using other providers with Vagrant.

24) What are Plugins in Vagrant and how they assist?

A) Vagrant comes with many great features out of the box to get your environments up and running. Sometimes, however, you want to change the way Vagrant does something or add additional functionality to Vagrant. This can be done via Vagrant plugins.

25) What is Vagrant Push?

A) Vagrant is capable of deploying or “pushing” application code in the same directory as your Vagrantfile to a remote such as an FTP server.

Pushes are defined in an application’s Vagrantfile and are invoked using the vagrant push subcommand.
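
A hedged sketch of an FTP push definition in a Vagrantfile (host and credentials are placeholders; attribute names follow the Vagrant push FTP strategy documentation):

  Vagrant.configure("2") do |config|
    config.vm.box = "hashicorp/bionic64"  # example box

    # "vagrant push" uploads the project files to this (placeholder) FTP host.
    config.push.define "ftp" do |push|
      push.host = "ftp.example.com"
      push.username = "deploy-user"
      push.password = "deploy-password"
    end
  end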

26) What is Vagrant in DevOps?

A) DevOps is a lot more than configuration management. Vagrant is another tool to help your organization transition to a DevOps culture. Vagrant also helps improve your entire workflow of using Puppet, improving development and process for both developers and operations.

27) What is a Vagrant image?

A) The Vagrantfile in a box contains information that will be merged into the Vagrantfile created when you run vagrant init boxname in a folder. The box-disk.vmdk is the virtual hard disk drive. The box.ovf defines the virtual hardware for the box.

28) What is Vagrant Linux?

A) Vagrant manages virtual machines hosted in Oracle VirtualBox, a full x86 virtualizer that is also open source (GPLv2). A virtual machine is a software implementation of a computer, running a complete operating system stack on a virtualizer. It is a full implementation of a computer with a virtual disk, memory, and CPU.

29) What is Vagrant VirtualBox?

A) Vagrant comes with out-of-the-box support for VirtualBox, a free, cross-platform consumer virtualization product. VirtualBox can be installed by downloading a package or installer for your operating system and using standard procedures to install that package.

30) What is Ansible and Vagrant?

A) Vagrant is a tool to manage virtual machine environments, and allows you to configure and use reproducible work environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner for these virtual machines, and the two tools work together well.
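
A minimal sketch of that integration (playbook.yml is a placeholder path relative to the Vagrantfile):

  Vagrant.configure("2") do |config|
    config.vm.box = "hashicorp/bionic64"  # example box

    # Vagrant runs this playbook against the guest during provisioning.
    config.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
    end
  end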

OTHER INTERVIEW QUESTIONS

  1. 60 Java Multiple Choice Questions
  2. 40 Core Java MCQ Questions
  3. Anaplan Interview Questions And Answers
  4. Tableau Multiple Choice Questions
  5. Python Coding Interview Questions
  6. CSS3 Interview Questions
  7. Linux Administrator Interview Questions
  8. SQL Interview Questions
  9. Hibernate Interview Questions
  10. Kubernetes Interview Questions
  11. Kibana Interview Questions
  12. Nagios Interview Questions
  13. Jenkins Interview Questions
  14. Chef Interview Questions
  15. Puppet Interview Questions
  16. RPA Interview Questions And Answers
  17. Android Interview Questions
  18. Mulesoft Interview Questions
  19. JSON Interview Questions
  20. PeopleSoft HRMS Interview Questions
  21. PeopleSoft Functional Interview Questions
  22. PeopleTools Interview Questions
  23. Peoplesoft Technical Interview Questions
  24. 199 Peoplesoft Interview Questions
  25. 200 Blue Prism Interview Questions
  26. Visualforce Interview Questions
  27. Salesforce Interview Questions
  28. 300 SSIS Interview Questions
  29. PHP Interview Questions And Answers
  30. Alteryx Interview Questions
  31. AWS Cloud Support Interview Questions
  32. Google Kubernetes Engine Interview Questions
  33. AWS Devops Interview Questions

Most Asked AWS Devops Interview Questions And Answers

AWS DevOps Interview Questions And Answers For Experienced 2018. In this blog post, Coding Compiler presents a list of 50 AWS interview questions for DevOps professionals. These 50 DevOps interview questions were prepared by AWS DevOps experts and have been asked in AWS DevOps job interviews at various MNC companies. Read these AWS DevOps interview questions and answers to crack your next AWS job interview. All the best for your future and happy learning.

AWS Devops Interview Questions

  1. What is Amazon Web Services in DevOps?
  2. What is the role of a DevOps engineer?
  3. What is DevOps with cloud computing?
  4. Why do we use AWS for DevOps?
  5. What is DevOps Tooling by AWS?
  6. How do you handle Continuous Integration and Continuous Delivery in AWS Devops?
  7. What is AWS CodePipeline in AWS Devops?
  8. What is AWS CodeBuild in AWS Devops?
  9. What is AWS CodeDeploy in AWS Devops?
  10. What is AWS CodeStar in AWS Devops?
  11. What is Amazon Elastic Container Service in AWS Devops?
  12. What is AWS Lambda in AWS Devops?
  13. What are AWS Developer Tools?
  14. What is CodeCommit in AWS Devops?
  15. What are the benefits of AWS CodeBuild in AWS Devops?
  16. What is Amazon EC2 in AWS Devops?
  17. What is Amazon S3 in AWS Devops?
  18. What is Amazon RDS in AWS Devops?
  19. What is AWS Lambda in AWS Devops?
  20. What is Amazon QuickSight in AWS Devops?
  21. What is AWS IoT in AWS Devops?
  22. What are the benefits of AWS CodeDeploy in AWS Devops?

AWS Devops Interview Questions And Answers

Question # 1) What is Amazon Web Services in DevOps?

Answer # AWS provides services that help you practice DevOps at your company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.

 

Question # 2) What is the role of a DevOps engineer?

Answer # There’s no formal career track for becoming a DevOps engineer. They are either developers who get interested in deployment and network operations, or sysadmins who have a passion for scripting and coding, and move into the development side where they can improve the planning of test and deployment.

 

Question # 3) What is DevOps with cloud computing?

Answer # Inseparable development and operations practices are universally relevant. Cloud computing, Agile development, and DevOps are interlocking parts of a strategy for transforming IT into a business adaptability enabler. If cloud is an instrument, then DevOps is the musician that plays it.

 

Question # 4) Why do we use AWS for DevOps?

Answer # There are many benefits of using AWS for DevOps; they are:

  • Get Started Fast – Each AWS service is ready to use if you have an AWS account. There is no setup required or software to install.
  • Fully Managed Services – These services can help you take advantage of AWS resources quicker. You can worry less about setting up, installing, and operating infrastructure on your own. This lets you focus on your core product.
  • Built for Scale – You can manage a single instance or scale to thousands using AWS services. These services help you make the most of flexible compute resources by simplifying provisioning, configuration, and scaling.
  • Programmable – You have the option to use each service via the AWS Command Line Interface or through APIs and SDKs. You can also model and provision AWS resources and your entire AWS infrastructure using declarative AWS CloudFormation templates.
  • Automation – AWS helps you use automation so you can build faster and more efficiently. Using AWS services, you can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.
  • Secure – Use AWS Identity and Access Management (IAM) to set user permissions and policies. This gives you granular control over who can access your resources and how they access those resources.
  • Large Partner Ecosystem – AWS supports a large ecosystem of partners which integrate with and extend AWS services. Use your preferred third-party and open source tools with AWS to build an end-to-end solution.
  • Pay-As-You-Go – With AWS, purchase services as you need them and only for the period when you plan to use them. AWS pricing has no upfront fees, termination penalties, or long-term contracts. The AWS Free Tier helps you get started with AWS.

 

Question # 5) What is DevOps Tooling by AWS?

Answer # AWS provides services that help you practice DevOps at your company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.

Related Article: Chef Interview Questions

Question # 6) How do you handle Continuous Integration and Continuous Delivery in AWS Devops?

Answer # The AWS Developer Tools help you securely store and version your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment.

Start with AWS CodePipeline to build a continuous integration or continuous delivery workflow that uses AWS CodeBuild, AWS CodeDeploy, and other tools, or use each service separately.
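
As a hedged illustration, an existing pipeline can also be triggered and inspected from the AWS CLI (the pipeline name MyDemoPipeline is hypothetical):

  aws codepipeline start-pipeline-execution --name MyDemoPipeline
  aws codepipeline get-pipeline-state --name MyDemoPipeline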

 

Question # 7) What is AWS CodePipeline in AWS Devops?

Answer # AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates.

 

Question # 8) What is AWS CodeBuild in AWS Devops?

Answer # AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
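
For example, an existing build project can be started and inspected from the AWS CLI (the project name my-demo-project is hypothetical):

  aws codebuild start-build --project-name my-demo-project
  aws codebuild list-builds-for-project --project-name my-demo-project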

 

Question # 9) What is AWS CodeDeploy in AWS Devops?

Answer # AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

 

Question # 10) What is AWS CodeStar in AWS Devops?

Answer # AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster.

Top AWS Devops Interview Questions

AWS Devops Interview Questions # 11) How does Instacart use AWS DevOps?

Answer # Instacart uses AWS CodeDeploy to automate deployments for all of its front-end and back-end services. Using AWS CodeDeploy has enabled Instacart’s developers to focus on their product and worry less about deployment operations.

Related Article: Puppet Interview Questions

AWS Devops Interview Questions # 12) How does lululemon athletica use AWS DevOps?

Answer # lululemon athletica uses a variety of AWS services to engineer a fully automated, continuous integration and delivery system. lululemon deploys artifacts distributed via Amazon S3 using AWS CodePipeline. From this stage, the artifacts are deployed to AWS Elastic Beanstalk.

 

AWS Devops Interview Questions # 13) What is Amazon Elastic Container Service in AWS Devops?

Answer # Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.

 

AWS Devops Interview Questions # 14) What is AWS Lambda in AWS Devops?

Answer # AWS Lambda lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

Related Article: Kubernetes Interview Questions

AWS Devops Interview Questions # 15) What are AWS Developer Tools?

Answer # The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.

Together, these services help you securely store and version control your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment. You can use AWS CodePipeline to orchestrate an end-to-end software release workflow using these services and third-party tools or integrate each service independently with your existing tools.

 

AWS Devops Interview Questions # 16) What is CodeCommit in AWS Devops?

Answer # AWS CodeCommit is a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.
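
A hedged example of creating and cloning a repository from the AWS CLI (the repository name and region are placeholders):

  aws codecommit create-repository --repository-name MyDemoRepo
  git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo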

 

AWS Devops Interview Questions # 17) What are the benefits of AWS CodeBuild in AWS Devops?

Answer # AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers.

CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. With CodeBuild, you are charged by the minute for the compute resources you use.

AWS CodeBuild Benefits:

  • Fully Managed Build Service – AWS CodeBuild eliminates the need to set up, patch, update, and manage your own build servers and software. There is no software to install or manage.
  • Continuous Scaling – AWS CodeBuild scales automatically to meet your build volume. It immediately processes each build you submit and can run separate builds concurrently, which means your builds are not left waiting in a queue.
  • Pay as You Go – With AWS CodeBuild, you are charged based on the number of minutes it takes to complete your build.
  • Extensible – You can bring your own build tools and programming runtimes to use with AWS CodeBuild by creating customized build environments in addition to the prepackaged build tools and runtimes supported by CodeBuild.
  • Enables Continuous Integration and Delivery – AWS CodeBuild belongs to a family of AWS Code Services, which you can use to create complete, automated software release workflows for continuous integration and delivery (CI/CD). You can also integrate CodeBuild into your existing CI/CD workflow.
  • Secure – With AWS CodeBuild, your build artifacts are encrypted with customer-specific keys that are managed by the AWS Key Management Service (KMS). CodeBuild is integrated with AWS Identity and Access Management (IAM), so you can assign user-specific permissions to your build projects.

Related Article: Docker Interview Questions

AWS Devops Interview Questions # 18) What is Amazon EC2 in AWS Devops?

Answer # Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

 

AWS Devops Interview Questions # 19) What is Amazon S3 in AWS Devops?

Answer # Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.

 

AWS Devops Interview Questions # 20) What is Amazon RDS in AWS Devops?

Answer # Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.

AWS Devops Interview Questions And Answers For Experienced

Question # 21) What is AWS Lambda in AWS Devops?

Answer # AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.

Related Article: Kubernetes Interview Questions

Question # 22) What is Amazon QuickSight in AWS Devops?

Answer # Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data.

 

Question # 23) What is AWS IoT in AWS Devops?

Answer # AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices.

 

Question # 24) What are the benefits of AWS CodeDeploy in AWS Devops?

Answer # AWS CodeDeploy is a service that automates software deployments to a variety of compute services including Amazon EC2, AWS Lambda, and instances running on-premises.

AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

AWS CodeDeploy Benefits:

  • Automated Deployments – AWS CodeDeploy fully automates your software deployments, allowing you to deploy reliably and rapidly. You can consistently deploy your application across your development, test, and production environments whether deploying to Amazon EC2, AWS Lambda, or instances running on-premises. The service scales with your infrastructure so you can deploy to one Lambda function or thousands of EC2 instances.
  • Minimize Downtime – AWS CodeDeploy helps maximize your application availability during the software deployment process. It introduces changes incrementally and tracks application health according to configurable rules. Software deployments can easily be stopped and rolled back if there are errors.
  • Centralized Control – AWS CodeDeploy allows you to easily launch and track the status of your application deployments through the AWS Management Console or the AWS CLI. CodeDeploy gives you a detailed report allowing you to view when and to where each application revision was deployed.
  • Easy To Adopt – AWS CodeDeploy is platform and language agnostic, works with any application, and provides the same experience whether you’re deploying to Amazon EC2 or AWS Lambda. You can easily reuse your existing setup code. CodeDeploy can also integrate with your existing software release process or continuous delivery toolchain (e.g., AWS CodePipeline, GitHub, Jenkins).

Advanced AWS Devops Interview Questions

Question # 25) Can you use CodeBuild to automate your release process?

Answer # Yes. CodeBuild is integrated with AWS CodePipeline. You can add a build action and set up a continuous integration and continuous delivery process that runs in the cloud.

 

Question # 26) What is a build project in AWS Devops?

Answer # A build project is used to define how CodeBuild will run a build. It includes information such as where to get the source code, which build environment to use, the build commands to run, and where to store the build output. A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build.

Related Article: Ansible Interview Questions

Question # 27) How do you configure a build project in AWS Devops?

Answer # A build project can be configured through the console or the AWS CLI. You specify the source repository location, the runtime environment, the build commands, the IAM role assumed by the container, and the compute class required to run the build. Optionally, you can specify build commands in a buildspec.yml file.
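
A minimal buildspec.yml sketch (the echo commands stand in for real install and build steps):

  version: 0.2

  phases:
    install:
      commands:
        - echo "install dependencies here"
    build:
      commands:
        - echo "compile and run tests here"

  artifacts:
    files:
      - '**/*'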

 

Question # 28) Which source repositories does CodeBuild support in AWS Devops?

Answer # CodeBuild can connect to AWS CodeCommit, S3, and GitHub to pull source code for builds.

 

Question # 29) Which programming frameworks does CodeBuild support in AWS Devops?

Answer # CodeBuild provides preconfigured environments for supported versions of Java, Ruby, Python, Go, Node.js, Android, and Docker. You can also customize your own environment by creating a Docker image and uploading it to the Amazon EC2 Container Registry or the Docker Hub registry. You can then reference this custom image in your build project.

Related Article: Nagios Interview Questions

Question # 30) What happens when a build is run in CodeBuild in AWS Devops?

Answer # CodeBuild will create a temporary compute container of the class defined in the build project, load it with the specified runtime environment, download the source code, execute the commands configured in the project, upload the generated artifact to an S3 bucket, and then destroy the compute container. During the build, CodeBuild will stream the build output to the service console and Amazon CloudWatch Logs.

Most Popular AWS Devops Interview Questions

AWS Devops Interview Questions # 31) How do you set up your first build in CodeBuild in AWS Devops?

Answer # Sign in to the AWS Management Console, create a build project, and then run a build.

 

AWS Devops Interview Questions # 32) Can you use CodeBuild with Jenkins in AWS Devops?

Answer # Yes. The CodeBuild Plugin for Jenkins can be used to integrate CodeBuild into Jenkins jobs. The build jobs are sent to CodeBuild, eliminating the need for provisioning and managing the Jenkins worker nodes.

 

AWS Devops Interview Questions # 33) How can you view past build results in AWS CodeBuild?

Answer # You can access your past build results through the console or the API. The results include outcome (success or failure), build duration, output artifact location, and log location.

 

AWS Devops Interview Questions # 34) How can you debug a past build failure in AWS CodeBuild?

Answer # You can debug a build by inspecting the detailed logs generated during the build run.

 

AWS Devops Interview Questions # 35) What types of applications can you build with AWS CodeStar?

Answer # CodeStar can be used for building web applications, web services and more. The applications run on Amazon EC2, AWS Elastic Beanstalk or AWS Lambda. Project templates are available in several different programming languages including Java, Node.js (Javascript), PHP, Python and Ruby.

 

AWS Devops Interview Questions # 36) How do you add, remove, or change users for your AWS CodeStar projects?

Answer # You can add, change or remove users for your CodeStar project through the “Team” section of the CodeStar console. You can choose to grant the users Owner, Contributor or Viewer permissions. You can also remove users or change their roles at any time.

 

AWS Devops Interview Questions # 37) How do AWS CodeStar users relate to IAM users?

Answer # CodeStar users are IAM users that are managed by CodeStar to provide pre-built, role-based access policies across your development environment. Because CodeStar users are built on IAM, you still get the administrative benefits of IAM. For example, if you add an existing IAM user to a CodeStar project, the existing global account policies in IAM are still enforced.

 

AWS Devops Interview Questions # 38) Can I work on my AWS CodeStar projects directly from an IDE?

Answer # Yes. By installing the AWS Toolkit for Eclipse or Visual Studio you gain the ability to easily configure your local development environment to work with CodeStar projects. Once installed, developers can then select from a list of available CodeStar projects and have their development tooling automatically configured to clone and check out their project’s source code, all from within their IDE.

 

AWS Devops Interview Questions # 39) How do you configure your project dashboard?

Answer # Project dashboards can be configured to show the tiles you want, where you want them. To add or remove tiles, click on the “Tiles” drop-down on your project dashboard. To change the layout of your project dashboard, drag the tile to your desired position.

 

AWS Devops Interview Questions # 40) Are there any third party integrations that we can use with AWS CodeStar?

Answer # AWS CodeStar works with Atlassian JIRA to integrate issue management with your projects.

Amazon Devops Engineer Interview Questions

Question # 41) Can we use AWS CodeStar to help manage our existing AWS applications?

Answer # No. AWS CodeStar helps customers quickly start new software projects on AWS. Each CodeStar project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild and AWS CodeDeploy, that can be used on their own and with existing AWS applications.

 

Question # 42) Why AWS DevOps Matters?

Answer # Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer merely supports a business; rather it becomes an integral component of every part of a business.

Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain, such as logistics, communications, and operations.

In a similar way that physical goods companies transformed how they design, build, and deliver products using industrial automation throughout the 20th century, companies in today’s world must transform how they build and deliver software.

 

Question # 43) How do you adopt an AWS DevOps model?

Answer # Transitioning to DevOps requires a change in culture and mindset. At its simplest, DevOps is about removing the barriers between two traditionally siloed teams, development and operations.

In some organizations, there may not even be separate development and operations teams; engineers may do both. With DevOps, the two teams work together to optimize both the productivity of developers and the reliability of operations.

They strive to communicate frequently, increase efficiencies, and improve the quality of services they provide to customers. They take full ownership for their services, often beyond where their stated roles or titles have traditionally been scoped by thinking about the end customer’s needs and how they can contribute to solving those needs.

Quality assurance and security teams may also become tightly integrated with these teams. Organizations using a DevOps model, regardless of their organizational structure, have teams that view the entire development and infrastructure lifecycle as part of their responsibilities.

 

Question # 44) What are DevOps Practices?

Answer # There are a few key practices that help organizations innovate faster through automating and streamlining the software development and infrastructure management processes. Most of these practices are accomplished with proper tooling.

  • One fundamental practice is to perform very frequent but small updates. This is how organizations innovate faster for their customers.
  • These updates are usually more incremental in nature than the occasional updates performed under traditional release practices.
  • Frequent but small updates make each deployment less risky. They help teams address bugs faster because teams can identify the last deployment that caused the error.
  • Although the cadence and size of updates will vary, organizations using a DevOps model deploy updates much more often than organizations using traditional software development practices.
  • Organizations might also use a microservices architecture to make their applications more flexible and enable quicker innovation. The microservices architecture decouples large, complex systems into simple, independent projects.
  • Applications are broken into many individual components (services) with each service scoped to a single purpose or function and operated independently of its peer services and the application as a whole.
  • This architecture reduces the coordination overhead of updating applications, and when each service is paired with small, agile teams who take ownership of each service, organizations can move more quickly.

However, the combination of microservices and increased release frequency leads to significantly more deployments which can present operational challenges.

Thus, DevOps practices like continuous integration and continuous delivery solve these issues and let organizations deliver rapidly in a safe and reliable manner.

Infrastructure automation practices, like infrastructure as code and configuration management, help to keep computing resources elastic and responsive to frequent changes.

In addition, the use of monitoring and logging helps engineers track the performance of applications and infrastructure so they can react quickly to problems.

Together, these practices help organizations deliver faster, more reliable updates to their customers. Here is an overview of important DevOps practices.

Best AWS Devops Interview Questions

Question # 45) What is Continuous Integration in AWS Devops?

Answer # Continuous integration is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

 

Question # 46) What is Continuous Delivery in AWS Devops?

Answer # Continuous delivery is a software development practice where code changes are automatically built, tested, and prepared for a release to production.

It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through a standardized test process.

 

Question # 47) What are Microservices in AWS Devops?

Answer # The microservices architecture is a design approach to build a single application as a set of small services. Each service runs in its own process and communicates with other services through a well-defined interface using a lightweight mechanism, typically an HTTP-based application programming interface (API).

Microservices are built around business capabilities; each service is scoped to a single purpose. You can use different frameworks or programming languages to write microservices and deploy them independently, as a single service, or as a group of services.

 

Question # 48) What is Infrastructure as Code in AWS Devops?

Answer # Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.

The cloud’s API-driven model enables developers and system administrators to interact with infrastructure programmatically, and at scale, instead of needing to manually set up and configure resources.

Thus, engineers can interface with infrastructure using code-based tools and treat infrastructure in a manner similar to how they treat application code. Because they are defined by code, infrastructure and servers can quickly be deployed using standardized patterns, updated with the latest patches and versions, or duplicated in repeatable ways.

 

Question # 49) What is AWS CloudFormation in AWS Devops?

Answer # AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion.
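
A minimal template sketch (the logical resource name DemoBucket is arbitrary); assuming it is saved as template.yml, it could be provisioned with aws cloudformation create-stack --stack-name demo --template-body file://template.yml:

  AWSTemplateFormatVersion: '2010-09-09'
  Description: Minimal illustrative template that provisions one S3 bucket.
  Resources:
    DemoBucket:
      Type: AWS::S3::Bucket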

 

Question # 50) How is AWS CloudFormation different from AWS Elastic Beanstalk?

Answer # These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud.

It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS resources.

It supports the infrastructure needs of many different types of applications such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using AWS Elastic Beanstalk).

RELATED INTERVIEW QUESTIONS

  1. Apigee Interview Questions
  2. Actimize Interview Questions
  3. Kibana Interview Questions
  4. Nagios Interview Questions
  5. Jenkins Interview Questions
  6. Chef Interview Questions
  7. Puppet Interview Questions
  8. DB2 Interview Questions
  9. AnthillPro Interview Questions
  10. Angular 2 Interview Questions
  11. Hibernate Interview Questions
  12. ASP.NET Interview Questions
  13. PHP Interview Questions
  14. Kubernetes Interview Questions
  15. Docker Interview Questions
  16. CEH Interview Questions
  17. CyberArk Interview Questions
  18. Appian Interview Questions
  19. Drools Interview Questions
  20. Talend Interview Questions
  21. Selenium Interview Questions
  22. Ab Initio Interview Questions
  23. AB Testing Interview Questions
  24. Mobile Application Testing Interview Questions
  25. Pega Interview Questions
  26. UI Developer Interview Questions
  27. Tableau Interview Questions
  28. SAP ABAP Interview Questions
  29. Reactjs Interview Questions
  30. UiPath Interview Questions

Advanced Kibana Interview Questions And Answers

Top 15 Kibana Interview Questions And Answers For Experienced 2018. Here Coding Compiler presents a list of 15 Elasticsearch Kibana interview questions with answers. These interview questions on the Kibana ELK stack will help you crack your next Kibana job interview. All the very best and happy learning.

Kibana Interview Questions

  1. What is kibana?
  2. What is Kibana used for?
  3. What is Elasticsearch Logstash Kibana?
  4. What is the Filebeat?
  5. What is the elastic stack?
  6. What are the main components on Kibana interface?
  7. What is Kibana Discover interface?
  8. What is Kibana Visualize interface?
  9. What is Kibana Dashboard?
  10. How to create Kibana Dashboard?
  11. What are Kibana Settings?
  12. Is Elasticsearch a Nosql DB?
  13. What is Kibana Docker Image?
  14. What is Kibana Port?
  15. What is kibana.yml?

You Might Be Interested In – Elasticsearch Interview Questions

Kibana Interview Questions And Answers

Kibana Overview

  • Kibana is an open source software product
  • Kibana is a data visualization plugin for Elasticsearch
  • Kibana provides data visualization capabilities on top of the content indexed on an Elasticsearch cluster
  • Kibana ELK Stack: the combination of Elasticsearch, Logstash, and Kibana is referred to as the Elastic Stack
  • Kibana license: Apache License
  • Kibana is written in: JavaScript

Kibana Interview Questions # 1) What is Kibana?

A) Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.

 

Kibana Interview Questions # 2) What is Kibana used for?

A) Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana 3 is a web interface that can be used to search and view the logs that Logstash has indexed. Both of these tools are based on Elasticsearch. Elasticsearch, Logstash, and Kibana, when used together, are known as an ELK stack.

 

Kibana Interview Questions # 3) What is Elasticsearch Logstash Kibana?

A) The ELK stack consists of Elasticsearch, Logstash, and Kibana. Although they’ve all been built to work exceptionally well together, each one is a separate project that is driven by the open-source vendor Elastic—which itself began as an enterprise search platform vendor.

 

Kibana Interview Questions # 4) What is the Filebeat?

A) Filebeat is a log data shipper for local files. Installed as an agent on your servers, Filebeat monitors the log directories or specific log files, tails the files, and forwards them either to Elasticsearch or Logstash for indexing.

 

Kibana Interview Questions # 5) What is the elastic stack?

A) Elastic Stack is a group of open source products from Elastic designed to help users take data from any type of source and in any format and search, analyze, and visualize that data in real time.

 

Kibana Interview Questions # 6) What are the main components on Kibana interface?

A) The Kibana interface is divided into four main sections:

  • Discover
  • Visualize
  • Dashboard
  • Settings

 

Kibana Interview Questions # 7) What is Kibana Discover interface?

A) When you first connect to Kibana 4, you will be taken to the Discover page. By default, this page will display all of your ELK stack’s most recently received logs.

 

Kibana Interview Questions # 8) What is Kibana Visualize interface?

A) The Kibana Visualize page is where you can create, modify, and view your own custom visualizations. There are several different types of visualizations, ranging from Vertical bar and Pie charts to Tile maps (for displaying data on a map) and Data tables.

 

Kibana Interview Questions # 9) What is Kibana Dashboard?

A) The Kibana Dashboard page is where you can create, modify, and view your own custom dashboards. With a dashboard, you can combine multiple visualizations onto a single page, then filter them by providing a search query or by selecting filters by clicking elements in the visualization. Dashboards are useful for when you want to get an overview of your logs, and make correlations among various visualizations and logs.

 

Kibana Interview Questions # 10) How to create Kibana Dashboard?

A) To create a Kibana dashboard, first click the Dashboard menu item.

Here is a breakdown of the steps:

  • Click the Add Visualization icon
  • Add the “Log Counts” pie chart and the “Nginx: Top 10 client IP” histogram
  • Collapse the Add Visualization menu
  • Rearrange and resize the visualizations on the dashboard
  • Click the Save Dashboard icon
  • Choose a name for your dashboard before saving it

Kibana Elasticsearch Interview Questions

Kibana Interview Questions # 11) What are Kibana Settings?

A) The Kibana Settings page lets you change a variety of things, such as default values or index patterns. The most commonly used sections are Indices and Objects.

 

Kibana Interview Questions # 12) Is Elasticsearch a Nosql DB?

A) Elasticsearch is a full-text, distributed NoSQL database. In other words, it uses documents rather than schema or tables. It’s a free, open source tool that allows for real-time searching and analyzing of your data.

 

Kibana Interview Questions # 13) What is Kibana Docker Image?

A) The images are available in two different configurations or “flavors”. The x-pack flavor, which is the default, ships with X-Pack features pre-installed. The oss flavor does not include X-Pack, and contains only open source Kibana.

You can download Kibana docker image at: https://github.com/elastic/kibana-docker

 

Kibana Interview Questions # 14) What is Kibana Port?

A) The default settings configure Kibana to run on localhost:5601 . To change the host or port number, or connect to Elasticsearch running on a different machine, you’ll need to update your kibana.yml file. You can also enable SSL and set a variety of other options.

 

Kibana Interview Questions # 15) What is kibana.yml?

A) The Kibana server reads properties from the kibana.yml file on startup. To change the host or port number, or connect to Elasticsearch running on a different machine, you’ll need to update your kibana.yml file. You can also enable SSL and set a variety of other options.
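
A minimal kibana.yml sketch (values are examples; these setting names are from the Kibana 5.x/6.x line, and later releases renamed elasticsearch.url to elasticsearch.hosts):

  # Port and interface the Kibana server listens on.
  server.port: 5601
  server.host: "0.0.0.0"

  # The Elasticsearch instance Kibana queries.
  elasticsearch.url: "http://localhost:9200"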

RELATED INTERVIEW QUESTIONS

  1. Nagios Interview Questions
  2. Jenkins Interview Questions
  3. Chef Interview Questions
  4. Puppet Interview Questions
  5. DB2 Interview Questions
  6. AnthillPro Interview Questions
  7. Angular 2 Interview Questions
  8. Hibernate Interview Questions
  9. ASP.NET Interview Questions
  10. PHP Interview Questions
  11. Kubernetes Interview Questions
  12. Docker Interview Questions
  13. CEH Interview Questions
  14. CyberArk Interview Questions
  15. Appian Interview Questions
  16. Drools Interview Questions
  17. Talend Interview Questions
  18. Selenium Interview Questions
  19. Ab Initio Interview Questions
  20. AB Testing Interview Questions
  21. Mobile Application Testing Interview Questions
  22. Pega Interview Questions
  23. UI Developer Interview Questions
  24. Tableau Interview Questions
  25. SAP ABAP Interview Questions
  26. Reactjs Interview Questions
  27. UiPath Interview Questions
  28. Automation Anywhere Interview Questions
  29. RPA Interview Questions
  30. RPA Blue Prism Interview Questions

Advanced Nagios Interview Questions And Answers

Top 30 Nagios Interview Questions And Answers For Experienced 2018. If you are searching for interview questions on Nagios, your search ends here. In this blog post, Coding Compiler presents a list of 30 Nagios DevOps interview questions. We hope that these Nagios questions will help you crack your next DevOps job interview. All the best for your future and happy learning.

Nagios Interview Questions

  1. What is Nagios?
  2. What is Nagios monitoring tool in Linux?
  3. What is an icinga?
  4. What are active and passive checks in Nagios?
  5. What is OID Nagios?
  6. What does Nagios use to monitor?
  7. What does Check_mk do?
  8. What is icinga2?
  9. What is a plugin in Nagios?
  10. Can Nagios monitor Windows machine?
  11. What is Nrpe in Nagios?
  12. What is Nagios XI?
  13. What are the benefits of using Nagios?
  14. What is Active Check?
  15. What is Nagios Log Server?
  16. What is Nagios Network Analyzer?
  17. Explain the process of website Monitoring With Nagios?
  18. What are the benefits of website monitoring with Nagios?
  19. What are the benefits of HTTP monitoring with Nagios?
  20. What are the benefits of SSL Certificate Monitoring With Nagios?
  21. What are the benefits of Database Monitoring with Nagios?
  22. Which databases does Nagios support?
  23. Which protocols does Nagios support for monitoring?
  24. What are the benefits of Operating System (OS) Monitoring with Nagios?
  25. Which operating systems does Nagios support for monitoring?
  26. What are the benefits of Cloud Computing And Cloud Monitoring With Nagios?
  27. Explain Virtualization With Nagios?
  28. Explain Application Server Monitoring With Nagios?
  29. Explain Storage Monitoring With Nagios?
  30. Explain Log Monitoring and Management with Nagios?

Nagios Interview Questions And Answers

Nagios Interview Questions # 1) What is Nagios?

A) Nagios is a powerful open source monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes.

 

Nagios Interview Questions # 2) What is Nagios monitoring tool in Linux?

A) Nagios provides complete monitoring of Linux operating systems and distributions – including operating system metrics, service state, process state, file system usage, and more. When you use Nagios to monitor your Linux environment, you’re using one of the most powerful Linux monitoring tools on the planet.

 

Nagios Interview Questions # 3) What is an icinga?

A) Icinga is an open source computer system and network monitoring application. It was originally created as a fork of the Nagios system monitoring application in 2009. The name Icinga is a Zulu word meaning “it looks for”, “it browses” or “it examines” and is pronounced with a click consonant.

 

Nagios Interview Questions # 4) What are active and passive checks in Nagios?

A) Active checks can be used to “poll” a device or service for status information every so often. Nagios also supports a way to monitor hosts and services passively instead of actively. The key feature of passive checks is that they are initiated and performed by external applications/processes.
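
As a hedged illustration, an external application submits a passive service result by writing a PROCESS_SERVICE_CHECK_RESULT line to the Nagios external command file (host name, service description, and the command-file path are placeholders; /usr/local/nagios/var/rw/nagios.cmd is the usual default for source installs):

  # Fields: [timestamp] PROCESS_SERVICE_CHECK_RESULT;host;service;return_code;output
  echo "[$(date +%s)] PROCESS_SERVICE_CHECK_RESULT;host1;My Service;0;OK - all good" > /usr/local/nagios/var/rw/nagios.cmd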

 

Nagios Interview Questions # 5) What is OID Nagios?

A) SNMP (Simple Network Management Protocol) is a network protocol designed for monitoring network-attached devices. It uses OIDs (Object IDentifiers) for defining the information, known as MIBs (Management Information Base), that can be monitored.

Nagios Basic Interview Questions

Nagios Interview Questions # 6) What does Nagios use to monitor?

A) Nagios, now known as Nagios Core, is a free and open source software application that monitors systems, networks, and infrastructure. Nagios offers monitoring and alerting services for servers, switches, applications, and services.

 

Nagios Interview Questions # 7) What does Check_mk do?

A) Check_MK is an extension to the Nagios monitoring system that allows creating rule-based configuration using Python and offloading work from the Nagios core to make it scale better, allowing more systems to be monitored from a single Nagios server.

 

Nagios Interview Questions # 8) What is icinga2?

A) Icinga 2 is an open source monitoring system which checks the availability of your network resources, notifies users of outages, and generates performance data for reporting. Scalable and extensible, Icinga 2 can monitor large, complex environments across multiple locations.

 

Nagios Interview Questions # 9) What is a plugin in Nagios?

A) Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.
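
As a minimal hypothetical sketch, any executable that prints one status line and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN) can serve as a plugin:

  #!/bin/sh
  # Trivial example plugin: always reports OK via its exit code.
  echo "OK - everything looks fine"
  exit 0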

 

Nagios Interview Questions # 10) Can Nagios monitor Windows machine?

A) Yes. To monitor Windows machines you need to follow several steps: install the NSClient++ addon on the Windows machine, configure the Nagios server for monitoring Windows machines, and add new host and service definitions for the Windows machine.

Nagios XI Interview Questions

Nagios Interview Questions # 11) What is Nrpe in Nagios?

A) NRPE allows you to remotely execute Nagios plugins on other Linux/Unix machines. This allows you to monitor remote machine metrics (disk usage, CPU load, etc.). NRPE can also communicate with some of the Windows agent addons, so you can execute scripts and check metrics on remote Windows machines as well.
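
A hedged sketch of wiring NRPE into the Nagios object configuration (assumes a generic-service template exists and that check_disk is defined in the remote host's nrpe.cfg; host and command names are placeholders):

  # Command wrapper around the check_nrpe binary shipped with the NRPE addon.
  define command {
      command_name    check_nrpe
      command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
  }

  # Service that asks the remote NRPE agent to run its local check_disk.
  define service {
      use                     generic-service
      host_name               remote-linux-box
      service_description     Root Disk
      check_command           check_nrpe!check_disk
  }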

 

Nagios Interview Questions # 12) What is Nagios XI?

A) Nagios XI provides monitoring of all mission-critical infrastructure components including applications, services, operating systems, network protocols, systems metrics, and network infrastructure. Hundreds of third-party addons provide for monitoring of virtually all in-house and external applications, services, and systems.

Nagios Interview Questions # 13) What are the benefits of using Nagios?

A) There are many benefits of using Nagios:

  • Plan for infrastructure upgrades before outdated systems cause failures
  • Respond to issues at the first sign of a problem
  • Automatically fix problems when they are detected
  • Coordinate technical team responses
  • Ensure your organization’s SLAs are being met
  • Ensure IT infrastructure outages have a minimal effect on your organization’s bottom line
  • Monitor your entire infrastructure and business processes

 

Nagios Interview Questions # 14) What is Active Check?

A) A check that is initiated and performed by Nagios Core or Nagios XI – usually on a pre-determined schedule. Plugins are used to perform active checks.

Nagios Server Interview Questions

Nagios Interview Questions # 15) What is Nagios Log Server?

A) Nagios Log Server greatly simplifies the process of searching your log data. Set up alerts to notify you when potential threats arise, or simply query your log data to quickly audit any system. With Nagios Log Server, you get all of your log data in one location, with high availability and fail-over built right in.

 

Nagios Interview Questions # 16) What is Nagios Network Analyzer?

A) Nagios Network Analyzer provides an in-depth look at all network traffic sources and potential security threats allowing system admins to quickly gather high-level information regarding the health of the network as well as highly granular data for complete and thorough network analysis using netflow, sflow, jflow, etc.

 

Nagios Interview Questions # 17) Explain the process of website Monitoring With Nagios?

A) Nagios provides complete monitoring of websites, web applications, web transactions, and web services – including availability, URL monitoring, HTTP status, content monitoring, hijack detection, and more.

 

Nagios Interview Questions # 18) What are the benefits of website monitoring with Nagios?

A) Implementing effective website monitoring with Nagios offers the following benefits:

  • Increased website and web application availability
  • Increased website performance
  • Fast detection of outages, website defacement, and website hijacking
  • Capacity planning information for future web server and application upgrades

Nagios Interview Questions # 19) What are the benefits of HTTP monitoring with Nagios?

A) Nagios provides complete monitoring of HTTP and HTTPS servers and protocols.

Benefits – Implementing effective HTTP monitoring with Nagios offers the following benefits:

  • Increased server, services, and application availability
  • Fast detection of network outages and protocol failures
  • User experience monitoring
  • Web server performance monitoring
  • Web transaction monitoring
  • URL monitoring

Nagios Server Interview Questions

Nagios Interview Questions # 20) What are the benefits of SSL Certificate Monitoring With Nagios?

A) Nagios provides SSL Certificate monitoring to ensure that expired certificates don’t negatively impact your organization’s websites, applications, and security.

Benefits – Implementing effective SSL Certificate monitoring with Nagios offers the following benefits:

  • Increased website and application availability
  • Increased security

 

Nagios Interview Questions # 21) What are the benefits of Database Monitoring with Nagios?

A) Nagios provides complete monitoring of database servers and databases – including availability, database and table sizes, cache ratios, and other key metrics.

Benefits – Implementing effective database monitoring with Nagios offers the following benefits:

  • Increased application availability
  • Increased database performance
  • Fast detection of database outages, failures, and table corruption
  • Predictive analysis of storage requirements and index performance

 

Nagios Interview Questions # 22) Which databases does Nagios support?

A) Nagios supports the following databases for monitoring:

  1. MySQL
  2. Postgres
  3. Oracle
  4. DB2
  5. Microsoft SQL Server

 

Nagios Interview Questions # 23) Which protocols does Nagios support for monitoring?

A) Nagios supports monitoring of the following protocols; example check commands follow the list:

  • HTTP Monitoring
  • DNS Monitoring
  • FTP Monitoring
  • SNMP Monitoring
  • SMTP Monitoring
  • SSH Monitoring
  • LDAP Monitoring
  • IMAP Monitoring
  • POP Monitoring
  • ICMP Monitoring
  • DHCP Monitoring
  • IPMI Monitoring
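Each protocol in this list has a corresponding standard plugin. A few example invocations (host names are placeholder assumptions):

$ check_ssh -H shell.example.com           # SSH banner check
$ check_smtp -H mail.example.com           # SMTP greeting check
$ check_dns -H www.example.com -s 8.8.8.8  # resolve a name via a specific DNS server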

 

Nagios Interview Questions # 24) What are the benefits of Operating System (OS) Monitoring with Nagios?

A) Nagios provides complete monitoring of desktop and server operating systems – including system metrics, service states, process states, performance counters, event logs, applications (IIS, Exchange, Apache, MySQL, etc), and services (Active Directory, DHCP, Sendmail, etc).

Benefits: Implementing effective operating system monitoring with Nagios offers the following benefits:

  • Increased server, services, and application availability
  • Fast detection of network outages and protocol failures
  • Fast detection of failed services, processes and batch jobs

 

Nagios Interview Questions # 25) Which operating systems does Nagios support for monitoring?

A) Nagios supports monitoring of the following operating systems:

  • Windows Monitoring
  • Linux Monitoring
  • UNIX Monitoring
  • Solaris Monitoring
  • AIX Monitoring
  • HP-UX Monitoring
  • RHEL Monitoring
  • Ubuntu Monitoring
  • Debian Monitoring
  • CentOS Monitoring
  • Fedora Monitoring
  • SuSE Monitoring

 

Nagios Interview Questions # 26) What are the benefits of Cloud Computing And Cloud Monitoring With Nagios?

A) Nagios provides complete monitoring of cloud computing, web, and storage services. Nagios is capable of monitoring a variety of servers and operating systems – both physical and virtual.

Benefits – Implementing effective cloud monitoring with Nagios offers the following benefits:

  • Increased server, services, and application availability
  • Fast detection of network outages
  • Fast detection of cloud computing environment problems

 

Nagios Interview Questions # 27) Explain Virtualization With Nagios?

A) Nagios provides the capabilities to monitor an assortment of metrics on many different virtualization platforms. In addition, Nagios can be run from several different virtualization platforms such as VMware, Microsoft Virtual PC, Xen, Amazon EC2, etc. Nagios has pre-built VMs for both Nagios Core and Nagios XI created for VMware, as well as a Virtual PC image and an OVF template for Nagios XI.

Benefits: Implementing effective virtualization monitoring with Nagios offers the following benefits:

  • Increased server, services, and application availability
  • Fast detection of server and operating system failures
  • Fast detection of service and application failures
  • Reduced deployment time
  • Reduced administrative overhead
  • Centralized configuration
  • Ability to monitor metrics such as CPU usage, memory, networking, I/O, datastore usage, VM status, services, and more

 

Nagios Interview Questions # 28) Explain Application Server Monitoring With Nagios?

A) Nagios provides complete monitoring of application servers – including JBOSS, Websphere, Weblogic, ActiveMQ, and Tomcat.

Benefits: Implementing effective application server monitoring with Nagios offers the following benefits:

  • Increased server, services, and application availability
  • Fast detection of network outages and protocol failures
  • Fast detection of failed processes, services, and batch jobs

 

Nagios Interview Questions # 29) Explain Storage Monitoring With Nagios?

A) Nagios provides complete monitoring of storage systems – including directory size, disk usage, file count, file presence, file size, S.M.A.R.T. status, RAID array status, and more.

Benefits: Implementing effective storage monitoring with Nagios offers the following benefits:

  • Detection of failed batch jobs
  • Advanced planning for system upgrades
  • Fast detection of storage subsystem problems
  • Early detection of potential future failures
  • Reduced risk of unexpected downtime

 

Nagios Interview Questions # 30) Explain Log Monitoring and Management with Nagios?

A) Nagios provides complete monitoring and log management of application logs, log files, event logs, service logs, and system logs on Windows servers, Linux servers, and Unix servers. Nagios is capable of monitoring system logs, application logs, log files, and syslog data, and alerting you when a log pattern is detected.

Benefits: Implementing effective log monitoring with Nagios offers the following benefits:

  • Increased security
  • Increased awareness of network infrastructure problems
  • Increased server, services, and application availability
  • Fast detection of network outages and protocol failures
  • Fast detection of failed processes, services, cron jobs, and batch jobs
  • Audit compliance and regulatory compliance


Advanced Jenkins Interview Questions And Answers 2018

Top 28 Jenkins Interview Questions And Answers For Experienced 2018. If you are looking for Jenkins interview questions with answers, then you are in the right place. Here Coding Compiler is sharing a list of 28 real-time interview questions on Jenkins. These Jenkins interview questions for DevOps will help you to crack your next Jenkins job interview. Happy reading and all the best for your future.

Jenkins Interview Questions

  1. What is Jenkins?
  2. Why do we use Jenkins?
  3. What is Maven and what is Jenkins?
  4. What is the difference between Hudson and Jenkins?
  5. What is meant by continuous integration in Jenkins?
  6. Why do we use Jenkins with selenium?
  7. What are CI Tools?
  8. What is a CI CD pipeline?
  9. What is build pipeline in Jenkins?
  10. What is a Jenkins pipeline?
  11. What is a DSL Jenkins?
  12. What is continuous integration and deployment?
  13. What is the tool used for provisioning and configuration?
  14. What is the difference between Maven, Ant, and Jenkins?
  15. Which SCM tools Jenkins supports?
  16. How schedule a build in Jenkins?
  17. Why do we use Pipelines in Jenkins?
  18. What is a Jenkinsfile?
  19. How do you create Multibranch Pipeline in Jenkins?
  20. What is the blue ocean in Jenkins?
  21. What are the important plugins in Jenkins?
  22. What are Jobs in Jenkins?
  23. How do you create a Job in Jenkins?
  24. How do you configuring automatic builds in Jenkins?
  25. How to create a backup and copy files in Jenkins?

Jenkins Interview Questions And Answers

Jenkins Interview Questions
Jenkins is an: open source software / automation server
Jenkins can: help to automate the software development process; automate the process with continuous integration and facilitate technical aspects of continuous delivery
Jenkins developed by: Jenkins is a fork of a project called Hudson
Jenkins license: MIT
Jenkins is written in: Java

Jenkins Interview Questions # 1) What is Jenkins?

Answer # Jenkins is an open source automation server. Jenkins is a continuous integration tool developed in Java. Jenkins helps to automate the non-human part of software development process, with continuous integration and facilitating technical aspects of continuous delivery.

 

Jenkins Interview Questions # 2) Why do we use Jenkins?

Answer # Jenkins is an open-source continuous integration software tool written in the Java programming language for testing and reporting on isolated changes in a larger code base in real time. The Jenkins software enables developers to find and solve defects in a code base rapidly and to automate testing of their builds.

 

Jenkins Interview Questions # 3) What is Maven and what is Jenkins?

Answer # Maven is a build tool, in short, a successor of Ant. It helps with builds and version control. Jenkins, by contrast, is a continuous integration system in which Maven can be used for the build step. Jenkins can also be used to automate the deployment process.

 

Jenkins Interview Question # 4) What is the difference between Hudson and Jenkins?

Answer # Jenkins is the new Hudson. It really is more like a rename than a fork, since the whole development community moved to Jenkins. (Oracle is left sitting in a corner holding their old ball “Hudson“, but it’s just a soul-less project now.) In a nutshell, Jenkins CI is the leading open-source continuous integration server.

 

Jenkins Interview Questions # 5) What is meant by continuous integration in Jenkins?

Answer # Continuous integration is a process in which all development work is integrated as early as possible. The resulting artifacts are automatically created and tested. This process allows teams to identify errors as early as possible. Jenkins is a popular open source tool to perform continuous integration and build automation.

Interview Questions on Jenkins

 

Continuous Integration Interview Questions # 6) Why do we use Jenkins with selenium?

Answer # Running Selenium tests in Jenkins allows you to run your tests every time your software changes and to deploy the software to a new environment when the tests pass. Jenkins can also schedule your tests to run at specific times.

 

Jenkins CI CD Interview Questions # 7) What are CI Tools?

Answer # Here is the list of the top 8 Continuous Integration tools:

  • Jenkins
  • TeamCity
  • Travis CI
  • Go CD
  • Bamboo
  • GitLab CI
  • CircleCI
  • Codeship

 

Jenkins Pipeline Interview Questions # 8) What is a CI CD pipeline?

Answer # A continuous integration and deployment (CI/CD) pipeline is an important aspect of a software project. It saves a ton of manual, error-prone deployment work, and it results in higher-quality software through continuous integration, automated tests, and code metrics.

 

Jenkins Tough Interview Questions # 9) What is build pipeline in Jenkins?

Answer # Job chaining in Jenkins is the process of automatically starting other job(s) after the execution of a job. This approach lets you build multi-step build pipelines or trigger the rebuild of a project if one of its dependencies is updated.

 

Jenkin Interview Questions # 10) What is a Jenkins pipeline?

Answer # The Jenkins Pipeline plugin is a game changer for Jenkins users. Based on a Domain Specific Language (DSL) in Groovy, the Pipeline plugin makes pipelines scriptable and it is an incredibly powerful way to develop complex, multi-step DevOps pipelines.

Jenkins Interview Questions And Answers For Experienced

Jenkins Interview Questions # 11) What is a DSL Jenkins?

Answer # The Jenkins “Job DSL / Plugin” is made up of two parts: The Domain Specific Language (DSL) itself that allows users to describe jobs using a Groovy-based language, and a Jenkins plugin which manages the scripts and the updating of the Jenkins jobs which are created and maintained as a result.

 

Jenkins Interview Questions For Devops # 12) What is continuous integration and deployment?

Answer # Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

 

Jenkins Real Time Interview Questions # 13) What is the tool used for provisioning and configuration?

Answer # Ansible is an agent-less configuration management as well as orchestration tool. In Ansible, the configuration modules are called “Playbooks”. Like other tools, Ansible can be used for cloud provisioning.

 

Jenkins Questions And Answers # 14) What is the difference between Maven, Ant and Jenkins?

Answer # Maven and ANT are build tool but main difference is that maven also provides dependency management, standard project layout and project management. On difference between Maven, ANT and Jenkins, later is a continuous integration tool which is much more than build tool.

 

Jenkins Questions # 15) Which SCM tools Jenkins supports?

Answer # Jenkins supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands.

Jenkins Interview Questions For Testers

Devops Interview Questions Jenkins # 16) How schedule a build in Jenkins?

Answer # In Jenkins, under the job configuration we can define various build triggers. Simply find the ‘Build Triggers’ section and check the ‘Build Periodically’ checkbox. With a periodic build you can schedule the build by date, day of the week, and the time to execute the build.

The format of the ‘Schedule’ textbox is as follows:

MINUTE (0-59), HOUR (0-23), DAY (1-31), MONTH (1-12), DAY OF THE WEEK (0-7)
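A few example ‘Schedule’ entries (Jenkins also accepts an H token, which hashes the job name to spread load; this is standard Jenkins cron syntax):

# Build every weekday at 02:00
0 2 * * 1-5
# Build once a day at a stable, load-balanced time chosen by Jenkins
H H * * *
# Poll/build every 15 minutes
H/15 * * * *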

 

Continuous Integration Interview Questions # 17) Why do we use Pipelines in Jenkins?

Answer # Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive continuous delivery pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline:

  • Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
  • Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master.
  • Pausable: Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run.
  • Versatile: Pipelines support complex real-world continuous delivery requirements, including the ability to fork/join, loop, and perform work in parallel.
  • Extensible: The Pipeline plugin supports custom extensions to its DSL and multiple options for integration with other plugins.

Questions on Jenkins # 18) What is a Jenkinsfile?

Answer # A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.

Creating a Jenkinsfile, which is checked into source control, provides a number of immediate benefits:

  1. Code review/iteration on the Pipeline
  2. Audit trail for the Pipeline
  3. Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.

Interview Questions on Jenkins # 19) How do you create Multibranch Pipeline in Jenkins?

Answer # The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.

 

Devops Jenkins Interview Questions # 20) What is blue ocean in Jenkins?

Answer # Blue Ocean is a project that rethinks the user experience of Jenkins, modelling and presenting the process of software delivery by surfacing information that’s important to development teams with as few clicks as possible, while still staying true to the extensibility that is core to Jenkins.

Jenkins Interview Questions For Automation Testers

 

Jenkins Interview Questions For DevOps # 21) What are the important plugins in Jenkins?

Answers # Here is the list of some important Plugins in Jenkins:

  1. Maven 2 project
  2. Git
  3. Amazon EC2
  4. HTML publisher
  5. Copy artifact
  6. Join
  7. Green Balls

 

Interview Questions on Maven and Jenkins # 22) What are Jobs in Jenkins?

Answer # Jenkins can be used to perform the typical build server work, such as doing continuous/official/nightly builds, run tests, or perform some repetitive batch tasks. This is called “free-style software project” in Jenkins.

 

Jenkins Advanced Interview Questions # 23) How do you create a Job in Jenkins?

Answer # Go to Jenkins top page, select “New Job”, then choose “Build a free-style software project”. This job type consists of the following elements:

  • an optional SCM, such as CVS or Subversion, where your source code resides
  • optional triggers to control when Jenkins will perform builds
  • some sort of build script that performs the build (Ant, Maven, shell script, batch file, etc.) where the real work happens
  • optional steps to collect information out of the build, such as archiving the artifacts and/or recording Javadoc and test results
  • optional steps to notify other people/systems of the build result, such as sending e-mails, IMs, or updating the issue tracker

 

Selenium Jenkins Interview Questions # 24) How do you configuring automatic builds in Jenkins?

Answer # Builds in Jenkins can be triggered periodically (on a schedule, specified in configuration), when source changes in the project have been detected, or automatically by requesting the URL:

http://YOURHOST/jenkins/job/PROJECTNAME/build

 

Jenkins CI Interview Questions And Answers # 25) How to create a backup and copy files in Jenkins?

Answer # To create a backup, all you need to do is to periodically back up your JENKINS_HOME directory. This contains all of your build jobs configurations, your slave node configurations, and your build history. To create a back-up of your Jenkins setup, just copy this directory.

Jenkins Real-Time Interview Questions

26) What is the "trustAnchors parameter must be non-empty" error and how can you solve it?

A) The trustAnchors parameter must be non-empty error means that the truststore you specified was not found, or could not be opened, for example due to access permissions.

A common edge case produces the related InvalidAlgorithmParameterException on a server set up for SSL-only access: the keystore is in the PKCS#12 format while the truststore is in the JKS format. If (as in Tomcat's server.xml) keystoreType is set to PKCS12 but truststoreType is not specified, the truststore type defaults to the keystoreType, so the JKS truststore cannot be read. Specifying the truststoreType explicitly as JKS solves it.
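You can inspect the stores with the JDK's keytool to confirm a format mismatch; a hedged sketch, with paths and the store password as placeholders:

# Confirm the truststore exists, is readable, and really is a JKS store
$ keytool -list -keystore /path/to/truststore.jks -storepass changeit
# If the store is actually PKCS#12, keytool must be told so explicitly
$ keytool -list -keystore /path/to/truststore.p12 -storetype PKCS12 -storepass changeit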

27) What are the feature differences between Jenkins and Hudson?

A) Jenkins is the recent fork by the core developers of Hudson. To understand why, you need to know the history of the project. It was originally open source and supported by Sun. Like much of what Sun did, it was fairly open, but there was a bit of benign neglect. The source, trackers, website, etc. were hosted by Sun on their relatively closed java.net platform.

Then Oracle bought Sun. For various reasons Oracle has not been shy about leveraging what it perceives as its assets. Those include some control over the logistic platform of Hudson, and particularly control over the Hudson name. Many users and contributors weren’t comfortable with that and decided to leave.

So it comes down to what Hudson vs Jenkins offers. Both Oracle’s Hudson and Jenkins have the code. Hudson has Oracle and Sonatype’s corporate support and the brand. Jenkins has most of the core developers, the community, and (so far) much more actual work.

In fact, arguably it was Oracle who did the forking! And technically, too, that’s kinda what happened.

It’s interesting to see what comes out of “Hudson” though. While the “Winston summarizes the state and rosy future of the Hudson project” stuff they posted on the (new) Hudson website originally seemed like odd humour to me, perhaps this was a purposeful takeover, and the Sonatype guys actually have some big ideas up their sleeve. This analysis, suggesting a deliberate strategy by Oracle/Sonatype to oust Kohsuke and crew in order to create a more “enterprisy” Hudson, is a very interesting read!

In any case, this brief comparison a fortnight after the split, while not exactly scientific, shows Jenkins to be by far the more active of the two projects.

Jenkins has continued the path well-trodden by the original Hudson with frequent releases including many minor updates.

Oracle seems to have largely delegated work on the future path for Hudson to the Sonatype team, who has performed some significant changes, especially with respect to Maven. They have jointly moved it to the Eclipse foundation.

I would suggest that if you like the sound of:

  • less frequent releases, but ones that are more heavily tested for backwards compatibility (more of an “enterprise-style” release cycle)
  • a product focused primarily on strong Maven and/or Nexus integration (i.e., you have no interest in Gradle, Artifactory, etc.)
  • professional support offerings from Sonatype or maybe Oracle in preference to CloudBees, etc.
  • not minding a smaller community of plugin developers

then I would suggest Hudson.

Conversely, if you prefer:

  • more frequent updates, even if they require a bit more frequent tweaking and are perhaps slightly riskier in terms of compatibility (more of a “latest and greatest” release cycle)
  • a system with more active community support for, e.g., other build systems and artifact repositories
  • support offerings from the original creator et al., or no interest in professional support (e.g., you’re happy as long as you can get a fix in next week’s “latest and greatest”)
  • a classical OSS-style witches’ brew of a development ecosystem

then I would suggest Jenkins.

Jenkins CI Interview Questions

28) How to trigger a build remotely from Jenkins? How to configure Git post commit hook?

The requirement: whenever changes are made in the Git repository for a particular project, Jenkins should automatically start a build for that project.

A) As mentioned in “Polling must die: triggering Jenkins builds from a git hook”, you can notify Jenkins of a new commit:

With the latest Git plugin 1.1.14 (just released at the time), you can do this more easily by simply executing the following command:

curl http://yourserver/jenkins/git/notifyCommit?url=<URL of the Git repository>

This will scan all the jobs that are configured to check out the specified URL, and if they are also configured with polling, it’ll immediately trigger the polling (and if that finds a change worth a build, a build will be triggered in turn).

This allows a script to remain the same when jobs come and go in Jenkins. If you have multiple repositories under a single repository host application (such as Gitosis), you can share a single post-receive hook script with all the repositories. Finally, this URL doesn’t require authentication even for secured Jenkins, because the server doesn’t directly use anything that the client is sending; it runs polling to verify that there is a change before it actually starts a build.

As mentioned here, make sure to use the right address for your Jenkins server:

since we’re running Jenkins as a standalone web server on port 8080, the URL should have been without the /jenkins, like this:

http://jenkins:8080/git/[email protected]:tools/common.git
To reinforce that last point, ptha adds in the comments:

It may be obvious, but I had issues with:

curl http://yourserver/jenkins/git/notifyCommit?url=<URL of the Git repository>

The url parameter should match exactly what you have in the Repository URL of your Jenkins job. When copying examples I left out the protocol, in our case ssh://, and it didn’t work.

You can also use a simple post-receive hook like in “Push based builds using Jenkins and GIT”

#!/bin/bash
# Trigger the Jenkins job remotely using the job's authentication token
/usr/bin/curl --user USERNAME:PASS -s \
  http://jenkinsci/job/PROJECTNAME/build?token=1qaz2wsx
Configure your Jenkins job to be able to “Trigger builds remotely” and use an authentication token (1qaz2wsx in this example).

However, this is a project-specific script, and the author mentions a way to generalize it.
The first solution is easier as it doesn’t depend on authentication or a specific project.

Suppose you want the build to start only when the change set contains at least one Java file; if developers changed only XML files or property files, then the build should not start.

Basically, your build script can (a minimal sketch follows this list):

  • put a ‘build’ note (see git notes) on the first call
  • on subsequent calls, grab the list of commits between the HEAD of your branch candidate for build and the commit referenced by the ‘build’ git note (git show refs/notes/build): git diff --name-only SHA_build HEAD
  • parse that list and decide whether it needs to go on with the build
  • in any case, create/move the ‘build’ git note to HEAD
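A minimal shell sketch of that flow (the notes ref name and the .java filter are assumptions for illustration):

#!/bin/bash
# Find the commit last marked with a 'build' note
LAST_BUILD_SHA=$(git notes --ref=build list | awk '{print $2}' | head -n 1)
# Start a build only if at least one Java file changed since then
if git diff --name-only "$LAST_BUILD_SHA" HEAD | grep -q '\.java$'; then
  echo "Java sources changed; triggering build"
  # e.g. curl the Jenkins notifyCommit or trigger URL here, as shown earlier
fi
# In any case, move the 'build' note to HEAD
git notes --ref=build add -f -m "built" HEAD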


Top Chef Interview Questions And Answers

Top 82 Chef Interview Questions And Answers For Experienced and Freshers 2018. Here in this blog post Coding Compiler is presenting real-time Chef DevOps interview questions. This list will help you to crack your next Chef DevOps job interview.

Chef Interview Questions

  1. What is chef in devops?
  2. What is chef in automation?
  3. What is chef DK?
  4. What are chef client nodes?
  5. What is a chef server?
  6. What are work stations in chef?
  7. What are Cookbooks in chef?
  8. What is chef repo?
  9. What is chef-client Run?
  10. What is chef validator?
  11. Why do we use SSL Certificates in chef?
  12. What are Signed Headers in chef?
  13. What is SSL_CERT_FILE in chef?
  14. What are Knife Subcommands in chef?
  15. What is knife ssl check command in chef?
  16. What is knife ssl fetch command in chef?
  17. What are Data Bags?
  18. What are recipes in chef?
  19. What is chef resources file?
  20. What is apt_package resource in chef?

Chef Interview Questions And Answers

1) What is chef in devops?

A) Chef is a configuration management tool for dealing with machine setup on physical servers, virtual machines, and in the cloud. Many companies use Chef software to control and manage their infrastructure, including Facebook, Etsy, Cheezburger, and Indiegogo.

2) What is chef in automation?

A) Chef is a powerful automation platform that transforms infrastructure into code. The Chef server acts as a hub for configuration data.
The Chef server stores cookbooks, the policies that are applied to nodes, and metadata that describes each registered node that is being managed by the chef-client.

3) What is chef DK?

A) The Chef DK workstation is the location where users interact with Chef. On the workstation users author and test cookbooks using tools such as Test Kitchen and interact with the Chef server using the knife and chef command line tools.

4) What are chef client nodes?

A) Chef client nodes are the machines that are managed by Chef. The Chef client is installed on each node and is used to configure the node to its desired state.

5) What is a chef server?

A) The Chef server acts as a hub for configuration data. The Chef server stores cookbooks, the policies that are applied to nodes, and metadata that describes each registered node that is being managed by Chef. Nodes use the Chef client to ask the Chef server for configuration details, such as recipes, templates, and file distributions.

6) What are work stations in chef?

A) A workstation is a computer running the Chef Development Kit (ChefDK) that is used to author cookbooks, interact with the Chef server, and interact with nodes.

The workstation is the location from which most users do most of their work, including:

  • developing and testing cookbooks and recipes
  • testing Chef code
  • keeping the chef-repo synchronized with version source control
  • configuring organizational policy, including defining roles and environments, and ensuring that critical data is stored in data bags
  • interacting with nodes, as (or when) required, such as performing a bootstrap operation

7) What are Cookbooks in chef?

A) A cookbook is the fundamental unit of configuration and policy distribution. A cookbook defines a scenario and contains everything that is required to support that scenario:

  • recipes that specify the resources to use and the order in which they are to be applied
  • attribute values
  • file distributions
  • templates
  • extensions to Chef, such as custom resources and libraries

8) What is chef repo?

A) The chef-repo is a directory on your workstation that stores:

  • cookbooks (including recipes, attributes, custom resources, libraries, and templates)
  • roles
  • data bags
  • environments

The chef-repo directory should be synchronized with a version control system, such as git. All of the data in the chef-repo should be treated like source code.

9) What is chef-client Run?

A) A “chef-client run” is the term used to describe a series of steps that are taken by the chef-client when it is configuring a node.

10) What is chef validator?

A) chef-validator – Every request made by the chef-client to the Chef server must be an authenticated request using the Chef server API and a private key. When the chef-client makes a request to the Chef server, the chef-client authenticates each request using a private key located in /etc/chef/client.pem.

Chef Interview Questions Devops

Chef Interview Questions # 11) Why do we use SSL Certificates in chef?

A) An SSL certificate is used between the chef-client and the Chef server to ensure that each node has access to the right data.

Chef Interview Questions # 12) What are Signed Headers in chef?

A) Signed header authentication is used to validate communications between the Chef server and any node that is being managed by the Chef server.

Chef Interview Questions # 13) What is SSL_CERT_FILE in chef?

A) Use the SSL_CERT_FILE environment variable to specify the location for the SSL certificate authority (CA) bundle that is used by the chef-client.

Chef Interview Questions # 14) What are Knife Subcommands in chef?

A) The chef-client includes two knife commands for managing SSL certificates:

  • Use knife ssl check to troubleshoot SSL certificate issues.
  • Use knife ssl fetch to pull down a certificate from the Chef server to the /.chef/trusted_certs directory on the workstation.

Chef Interview Questions # 15) What is knife ssl check command in chef?

A) Run the knife ssl check subcommand to verify the state of the SSL certificate, and then use the response to help troubleshoot issues that may be present.

Chef Interview Questions # 16) What is knife ssl fetch command in chef?

A) Run the knife ssl fetch subcommand to download the self-signed certificate from the Chef server to the /.chef/trusted_certs directory on a workstation.
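In practice the two subcommands are often run together when setting up a workstation; a brief sketch:

# Download the Chef server's self-signed certificate into trusted_certs
$ knife ssl fetch
# Then verify that knife can establish a trusted SSL connection
$ knife ssl check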

Chef Interview Questions # 17) What are Data Bags?

A) A data bag is a global variable that is stored as JSON data and is accessible from a Chef server. A data bag is indexed for searching and can be loaded by a recipe or accessed during a search.
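Data bags are typically created and inspected with knife; a minimal sketch, with the bag name ‘users’ and the item file as placeholder assumptions:

# Create a data bag named 'users' on the Chef server
$ knife data bag create users
# Upload an item from a local JSON file
$ knife data bag from file users data_bags/users/alice.json
# Inspect the stored item
$ knife data bag show users alice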

Chef Interview Questions # 18) What are recipes in chef?

A) A recipe is the most fundamental configuration element within the organization. A recipe:

  • is authored using Ruby, which is a programming language designed to read and behave in a predictable manner
  • is mostly a collection of resources, defined using patterns (resource names, attribute-value pairs, and actions); helper code is added around this using Ruby, when needed
  • must define everything that is required to configure part of a system
  • must be stored in a cookbook
  • may be included in another recipe
  • may use the results of a search query and read the contents of a data bag (including an encrypted data bag)
  • may have a dependency on one (or more) recipes
  • may tag a node to facilitate the creation of arbitrary groupings
  • must be added to a run-list before it can be used by the chef-client
  • is always executed in the same order as listed in a run-list

Chef Interview Questions # 19) What is chef resources file?

A) A file resource is used to manage files directly on a node.

A file resource block manages files that exist on nodes. For example, to write the home page for an Apache website:

file '/var/www/customers/public_html/index.php' do
  content '<html>This is a placeholder for the home page.</html>'
  mode '0755'
  owner 'web_admin'
  group 'web_admin'
end

Chef Interview Questions # 20) What is apt_package resource in chef?

Answer) Use the apt_package resource to manage packages on Debian and Ubuntu platforms.

apt_package Syntax:

An apt_package resource block manages a package on a node, typically by installing it. The simplest use of the apt_package resource is:

apt_package 'package_name'

Chef Interview Questions Magazine

Chef Interview Questions # 21) What is apt_preference resource in chef?

A) The apt_preference resource allows for the creation of APT preference files. Preference files are used to control which package versions and sources are prioritized during installation. (New in Chef Client 13.3.)

Syntax:

apt_preference 'package_name' do
  action :add
end

Chef Interview Questions # 22) What is apt_repository resource?

A) Use the apt_repository resource to specify additional APT repositories. Adding a new repository will update the APT package cache immediately.

apt_repository 'nginx' do
  uri 'http://nginx.org/packages/ubuntu/'
  components ['nginx']
end

Chef Interview Questions # 23) What is apt_update resource in chef?

A) Use the apt_update resource to manage APT repository updates on Debian and Ubuntu platforms.

Chef Interview Questions # 24) what is bff_package resource in chef?

A) Use the bff_package resource to manage packages for the AIX platform using the installp utility. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.

Chef Interview Questions # 25) What is cab_package resource in chef?

A) Use the cab_package resource to install or remove Microsoft Windows cabinet (.cab) packages.

Chef Interview Questions # 26) What is chef_gem?

A) Use the chef_gem resource to install a gem only for the instance of Ruby that is dedicated to the chef-client. When a gem is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.

Chef Interview Questions # 27) What is chef_acl resource in chef?

A) Use the chef_acl resource to interact with access control lists (ACLs) that exist on the Chef server.

Syntax: The syntax for using the chef_acl resource in a recipe is as follows:

chef_acl 'name' do
  attribute 'value' # see properties section below
  action :action    # see actions section below
end

Chef Interview Questions # 28) What is chef_client resource?

A) A chef-client is an agent that runs locally on every node that is under management by Chef. When a chef-client is run, it will perform all of the steps that are required to bring the node into the expected state, including:

  • registering and authenticating the node with the Chef server
  • building the node object
  • synchronizing cookbooks
  • compiling the resource collection by loading each of the required cookbooks, including recipes, attributes, and all other dependencies
  • taking the appropriate and required actions to configure the node
  • looking for exceptions and notifications, handling each as required

Chef Interview Questions # 29) What is chef_container resource?

A) chef_container resource is used to interact with container objects that exist on the Chef server.

Chef Interview Questions # 30) What is chef_data_bag_item?

A) A data bag is a container of related data bag items, where each individual data bag item is a JSON file. knife can load a data bag item by specifying the name of the data bag to which the item belongs and then the filename of the data bag item.

Use the chef_data_bag_item resource to manage data bag items.

Syntax – The syntax for using the chef_data_bag_item resource in a recipe is as follows:

chef_data_bag_item 'name' do
  attribute 'value'
  action :action
end

Chef Tool Interview Questions

Chef Interview Questions # 31) What is chef_data_bag resource?

A) A data bag is a global variable that is stored as JSON data and is accessible from a Chef server. A data bag is indexed for searching and can be loaded by a recipe or accessed during a search.

Use the chef_data_bag resource to manage data bags.

Chef Interview Questions # 32) What is chef_environment resource?

A) Use the chef_environment resource to manage environments. An environment is a way to map an organization’s real-life workflow to what can be configured and managed when using Chef server. Every organization begins with a single environment called the _default environment, which cannot be modified (or deleted). Additional environments can be created to reflect each organization’s patterns and workflow.

Chef Interview Questions # 33) What is chef_group resource?

A) chef_group resource is used to interact with group objects that exist on the Chef server.

Chef Interview Questions # 34) What is chef_handler resource?

A) The chef_handler resource is used to enable handlers during a chef-client run. The resource allows arguments to be passed to the chef-client, which then applies the conditions defined by the custom handler to the node attribute data collected during the chef-client run, and then processes the handler based on that data.

Chef Interview Questions # 35) What is the chef_mirror resource?

A) Use the chef_mirror resource to mirror objects in the chef-repo to a specified location.

Chef Interview Questions # 36) What is chef_node resource?

A) A node is any machine—physical, virtual, cloud, network device, etc.—that is under management by Chef. chef_node resource is used to manage nodes.

Chef Interview Questions # 37) What is chef_organization resource?

A) Use the chef_organization resource to interact with organization objects that exist on the Chef server.

Chef Interview Questions # 38) What is chef_role resource?

A) Use the chef_role resource to manage roles. A role is a way to define certain patterns and processes that exist across nodes in an organization as belonging to a single job function. Each role consists of zero (or more) attributes and a run-list. Each node can have zero (or more) roles assigned to it.

Chef Interview Questions # 39) What is chef_user resource?

A) The chef_user resource is used to manage users.

Chef Interview Questions # 40) What is chocolatey_package resource?

A) A chocolatey_package resource manages packages using Chocolatey on the Microsoft Windows platform. The simplest use of the chocolatey_package resource is:

chocolatey_package 'package_name'

Chef Tool Interview Questions And Answers

41) What is cookbook_file resource?

A) Use the cookbook_file resource to transfer files from a sub-directory of COOKBOOK_NAME/files/ to a specified path located on a host that is running the chef-client.

Syntax – A cookbook_file resource block manages files by using files that exist within a cookbook’s /files directory. For example, to write the home page for an Apache website:

cookbook_file '/var/www/customers/public_html/index.php' do
  source 'index.php'
  owner 'web_admin'
  group 'web_admin'
  mode '0755'
  action :create
end

42) What is cron resource?

A) The cron resource is used to manage cron entries for time-based job scheduling.

43) What is dnf_package resource?

A) the dnf_package resource to install, upgrade, and remove packages with DNF for Fedora platforms. The dnf_package resource is able to resolve provides data for packages much like DNF can do when it is run from the command line. This allows a variety of options for installing packages, like minimum versions, virtual provides, and library names.

44) What is dpkg_package resource?

A) Use the dpkg_package resource to manage packages for the dpkg platform. When a package is installed from a local file, it must be added to the node using the remote_file or cookbook_file resources.

45) What is metadata.rb in chef?

A) Every cookbook requires a small amount of metadata. A file named metadata.rb is located at the top of every cookbook directory structure. The contents of the metadata.rb file provides hints to the Chef server to help ensure that cookbooks are deployed to each node correctly.

46) What information stored in metadata.rb file?

A) A metadata.rb file is:

  • located at the top level of a cookbook’s directory structure
  • compiled whenever a cookbook is uploaded to the Chef server or when the knife cookbook metadata subcommand is run, and then stored as JSON data
  • created automatically by knife whenever the knife cookbook create subcommand is run
  • edited using a text editor, and then re-uploaded to the Chef server as part of a cookbook upload

47) What is Berkshelf in chef?

A) Berkshelf is a dependency manager for Chef cookbooks. With it, you can easily depend on community cookbooks and have them safely included in your workflow.

48) What is Berksfile in chef?

A) A Berksfile describes the set of sources and dependencies needed to use a cookbook. It is used in conjunction with the berks command.
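A typical Berkshelf workflow looks like this (a sketch; it assumes a Berksfile already exists in the cookbook):

# Resolve and download the cookbook dependencies listed in the Berksfile
$ berks install
# Upload the cookbook and its resolved dependencies to the Chef server
$ berks upload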

49) What is Cookbook Keyword in chef?

A) The cookbook keyword allows the user to define where a cookbook is installed from, or to set additional version constraints. It can also be used to install additional cookbooks, for example to use during testing.

50) What is kitchen (executable) in chef?

A) kitchen is the command-line tool for Kitchen, an integration testing tool used by the chef-client. Kitchen runs tests against any combination of platforms using any combination of test suites.

Chef Interview Questions And Answers PDF

51) What is kitchen converge in chef?

A) Use the converge subcommand to converge one (or more) instances. Instances are based on the list of platforms in the .kitchen.yml file. This process will install the chef-client on an instance using the omnibus installer, upload cookbook files and minimal configuration to the instance, and then start a chef-client run using the run-list and attributes specified in the .kitchen.yml file.

Syntax – $ kitchen converge PLATFORMS (options)

52) What is kitchen create in chef?

A) Use the create subcommand to create one (or more) instances. Instances are based on the list of platforms and suites in the .kitchen.yml file.

Syntax – This subcommand has the following syntax:

$ kitchen create PLATFORMS (options)

53) What is kitchen destroy in chef?

A) Use the destroy subcommand to delete one (or more) instances. Instances are based on the list of platforms and suites in the .kitchen.yml file.

Syntax – This subcommand has the following syntax:

$ kitchen destroy PLATFORMS (options)

54) What is kitchen diagnose in chef?

A) Use the diagnose subcommand to show a computed diagnostic configuration for one (or more) instances. This subcommand will make all implicit configuration settings explicit because it echoes back all of the configuration data as YAML.

Syntax – This subcommand has the following syntax:

$ kitchen diagnose PLATFORMS (options)

55) What is kitchen driver create in chef?

A) Use the driver create subcommand to create a new Kitchen driver in the RubyGems project.

Syntax – This subcommand has the following syntax:

$ kitchen driver create NAME

56) What is kitchen driver discover?

A) Use the driver discover subcommand to discover Kitchen drivers that have been published to RubyGems. This subcommand will return all RubyGems that match kitchen-*.

Syntax – This subcommand has the following syntax:

$ kitchen driver discover

57) What is kitchen exec in chef?

A) Use the exec subcommand to execute a command on a remote instance.

Syntax – This subcommand has the following syntax:

$ kitchen exec PLATFORMS (options)

58) What is kitchen init command in chef?

A) Use the init subcommand to create an initial Kitchen environment, including:

  • creating a .kitchen.yml file
  • appending Kitchen to the RubyGems file, .gitignore, and .thor
  • creating the test/integration/default directory

Syntax – This subcommand has the following syntax:

$ kitchen init

59) What is kitchen list in chef?

A) Use the list subcommand to view the list of instances. Instances are based on the list of platforms in the .kitchen.yml file. Kitchen will auto-name instances by combining a suite name with a platform name. For example, if a suite is named default and a platform is named ubuntu-10.04, then the instance would be default-ubuntu-10.04. This ensures that Kitchen instances have safe DNS and hostname records.

Syntax – This subcommand has the following syntax:

$ kitchen list PLATFORMS (options)

60) What is kitchen login command in chef?

A) Use the login subcommand to log in to a single instance. Instances are based on the list of platforms and suites in the .kitchen.yml file. After logging in successfully, the instance can be interacted with just like any other virtual machine, including adding or removing packages, starting or stopping services, and so on. It’s a sandbox. Make any change necessary to help improve the coverage for cookbook testing.

Syntax – This subcommand has the following syntax:

$ kitchen login PLATFORM (options)

Chef Interview Questions And Answers For Experienced

61) What is kitchen setup command in chef?

A) Use the setup subcommand to set up one (or more) instances. Instances are based on the list of platforms in the .kitchen.yml file.

Syntax – This subcommand has the following syntax:

$ kitchen setup PLATFORMS (options)

62) What is kitchen test command in chef?

A) Use the test subcommand to test one (or more) verified instances. Instances are based on the list of platforms and suites in the .kitchen.yml file. This subcommand will create a new instance (cleaning up a previous instance, if necessary), converge that instance, set up the test harness, verify the instance using that test harness, and then destroy the instance.

In general, use the test subcommand to verify the end-to-end quality of a cookbook. Use the converge and verify subcommands during normal day-to-day development of a cookbook.

Syntax – This subcommand has the following syntax:

$ kitchen test PLATFORMS (options)
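For day-to-day work the subcommands are often run one at a time against a single instance; a sketch using default-ubuntu-1604 as an assumed instance name (actual names come from your .kitchen.yml):

$ kitchen create default-ubuntu-1604    # boot the instance
$ kitchen converge default-ubuntu-1604  # install chef-client and apply the run-list
$ kitchen verify default-ubuntu-1604    # run the test suite
$ kitchen destroy default-ubuntu-1604   # clean up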

63) What is kitchen verify command in chef?

A) Use the verify subcommand to verify one (or more) instances. Instances are based on the list of platforms and suites in the .kitchen.yml file.

In general, use the test subcommand to verify the end-to-end quality of a cookbook. Use the converge and verify subcommands during normal day-to-day development of a cookbook.

Syntax – This subcommand has the following syntax:

$ kitchen verify PLATFORMS (options)

64) What is kitchen version command in chef?

A) Use the version subcommand to print the version of Kitchen.

Syntax – This subcommand has the following syntax:

$ kitchen version

65) What are handlers in chef?

A) Handlers are used to identify situations that arise during a chef-client run, and then tell the chef-client how to handle these situations when they occur.

66) How many types of handlers are there in chef? What are they?

A) In Chef there are three types of handlers:

  • exception handler
  • report handler
  • start handler

67) What is exception handler in chef?

A) An exception handler is used to identify situations that have caused a chef-client run to fail. An exception handler can be loaded at the start of a chef-client run by adding a recipe that contains the chef_handler resource to a node’s run-list. An exception handler runs when the failed? property for the run_status object returns true.

68) What is a report handler in chef?

A) A report handler is used when a chef-client run succeeds and reports back on certain details about that chef-client run. A report handler can be loaded at the start of a chef-client run by adding a recipe that contains the chef_handler resource to a node’s run-list. A report handler runs when the success? property for the run_status object returns true.

69) What is start handler in chef?

A) A start handler is used to run events at the beginning of the chef-client run. A start handler can be loaded at the start of a chef-client run by adding the start handler to the start_handlers setting in the client.rb file or by installing the gem that contains the start handler by using the chef_gem resource in a recipe in the chef-client cookbook.

70) What is Handler DSL in chef?

A) Use the Handler DSL to attach a callback to an event. If the event occurs during the chef-client run, the associated callback is executed. For example:

  • Sending email if a chef-client run fails
  • Sending a notification to a chat application if an audit run fails
  • Aggregating statistics about resources updated during a chef-client run and sending them to StatsD

Chef Devops Interview Questions

71) What is Knife and what is the purpose of using Knife in chef?

A) Knife is a command-line tool that provides an interface between a local chef-repo and the Chef server. knife helps users to manage:

  • nodes
  • cookbooks and recipes
  • roles, environments, and data bags
  • resources within various cloud environments
  • the installation of the chef-client onto nodes
  • searching of indexed data on the Chef server

72) What are the different Knife plugins for cloud hosting platforms?

A) There are different knife plugins available for cloud hosting platforms:
knife azure, knife bluebox, knife ec2, knife eucalyptus, knife google, knife linode, knife openstack, and knife rackspace

73) What is Ohai in chef?

A) Ohai is a tool that is used to collect system configuration data, which is provided to the chef-client for use within cookbooks. Ohai is run by the chef-client at the beginning of every Chef run to determine system state. Ohai includes many built-in plugins to detect common configuration details as well as a plugin model for writing custom plugins.
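Ohai can also be run standalone from a shell, which is handy when writing cookbooks:

# Dump all collected system data as JSON
$ ohai
# Show a single attribute
$ ohai ipaddress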

74) Why do we use chef-jenkins plugin in chef?

A) chef-jenkins adds the ability to use Jenkins to drive continuous deployment and synchronization of environments from a git repository.

75) Why do we use jclouds-chef plugin in chef?

A) The jclouds-chef plugin adds Java and Clojure components to the Chef server REST API.

76) Why do we use chef-hatch-repo in chef?

A) chef-hatch-repo plugin adds a knife plugin and a Vagrant provisioner that can launch a self-managed Chef server in a virtual machine or Amazon EC2.

Real-Time Chef Interview Questions

77) Why do we use chef-trac-hacks in chef?

A) chef-trac-hacks adds the ability to fill a coordination gap between Amazon Web Services (AWS) and the chef-client.

78) What is chef-deploy plugin in chef and what is the purpose of using it?

A) chef-deploy adds a gem that contains resources and providers for deploying Ruby web applications from recipes.

79) What is kitchenplan in chef?

A) Kitchenplan is a utility for automating the installation and configuration of a workstation on macOS.

80) What is stove in chef?

A) Stove is a utility for releasing and managing cookbooks.

81) What are the benefits of Devops?

A) There are many benefits of using DevOps; be prepared to relate them to your own experience.

Technical benefits:

  • continuous software delivery
  • less complex problems to fix
  • faster resolution of problems

Business benefits:

  • faster delivery of features
  • more stable operating environments
  • more time available to add value (rather than fix/maintain)

82) What is Vagrant in chef?

A) Vagrant helps Test Kitchen communicate with VirtualBox and configures things like available memory and network settings.


Advanced Puppet Interview Questions And Answers

Top 57 Advanced Puppet Interview Questions And Answers For Experienced 2018. Here Coding Compiler has listed frequently asked DevOps Puppet interview questions. These 57 real-time Puppet interview questions are prepared by industry experts, so this list will help you to crack your next Puppet DevOps job interview. All the best for your future and happy learning.

Puppet Interview Questions

  1. What is Puppet?
  2. What is chef and puppet used for?
  3. How Puppet works?
  4. What are Resources in Puppet?
  5. What are Resource types in Puppet?
  6. How can you add new resource types to Puppet?
  7. What is a Class in Puppet?
  8. What is Node definition in Puppet?
  9. What are facts in Puppet?
  10. Puppet can access which facts?
  11. What are functions in Puppet?
  12. What is a class in Puppet, explain with example?
  13. How can you configure systems with Puppet?
  14. What are Catalogs in Puppet?
  15. Explain the agent/master architecture in Puppet?
  16. How can Puppet agent nodes and Puppet masters communicate with each other?
  17. Can you explain stand-alone architecture in Puppet?
  18. How can you explain installing Puppet agent in Linux?
  19. What is Puppet codedir?
  20. Where do you find codedir in Puppet?
  21. In Puppet where codedir is configured?
  22. What is a main manifest or site manifest in Puppet?
  23. What is Puppet apply?
  24. What is modulepath in Puppet?
  25. What is base modulepath?

Puppet Interview Questions And Answers

Puppet Question # 1) What is Puppet?

Answer # Puppet is an open-source software configuration management tool. It runs on many Unix-like systems as well as on Microsoft Windows, and includes its own declarative language to describe system configuration.

Puppet Question # 2) What is chef and puppet used for?

Answer # Puppet and Chef are the major configuration management systems on Linux, along with CFEngine and Ansible. More than a configuration management tool, Chef, along with Puppet and Ansible, is one of the industry’s most notable Infrastructure as Code (IaC) tools.

Puppet Question # 3) How Puppet works?

Answer # It works like this: the Puppet agent is a daemon that runs on all the client servers (the servers that require some configuration, or the servers that are going to be managed using Puppet). All the clients to be managed have the Puppet agent installed on them, and are called nodes in Puppet.

Puppet Question # 4) What are Resources in Puppet?

Answer # Resources are the fundamental unit for modeling system configurations. Each resource describes some aspect of a system, like a specific service or package.

A resource declaration is an expression that describes the desired state for a resource and tells Puppet to add it to the catalog. When Puppet applies that catalog to a target system, it manages every resource it contains, ensuring that the actual state matches the desired state.
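You can see this model from the command line with puppet resource, which inspects (and can set) the current state of a resource; the sshd service here is a placeholder example:

# Print the current state of the sshd service as Puppet code
$ puppet resource service sshd
# Declare the desired state directly from the CLI
$ puppet resource service sshd ensure=running enable=true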

Puppet Question # 5) What are Resource types in Puppet?

Answer # Every resource is associated with a resource type, which determines the kind of configuration it manages.

Puppet has many built-in resource types, like files, cron jobs, services, etc. See the resource type reference for information about the built-in resource types.

Puppet Question # 6) How can you add new resource types to Puppet?

Answer # New resource types can be added to Puppet in two ways:
Defined types are lightweight resource types written in the Puppet language.

Custom resource types are written in Ruby, and have access to the same capabilities as Puppet’s built-in types.
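
As a sketch (the module name and parameter are hypothetical), a defined type and a declaration of it might look like this:

# Defined type written in the Puppet language
define mymodule::webdir (String $owner = 'www-data') {
  # $title is the instance name, used here as the directory path
  file { $title:
    ensure => directory,
    owner  => $owner,
    mode   => '0755',
  }
}

# Declaring an instance of the defined type
mymodule::webdir { '/var/www/example': }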

Puppet Question # 7) What is a Class in Puppet?

Answer # Classes are named blocks of Puppet code that are stored in modules for later use and are not applied until they are invoked by name. They can be added to a node’s catalog by either declaring them in your manifests or assigning them from an ENC.

Classes generally configure large or medium-sized chunks of functionality, such as all of the packages, config files, and services needed to run an application.

Puppet Question # 8) What is Node definition in Puppet?

Answer # A node definition or node statement is a block of Puppet code that will only be included in matching nodes’ catalogs. This feature allows you to assign specific configurations to specific nodes.
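
A minimal sketch (the hostname is hypothetical, and the classes are assumed to be defined elsewhere, as in question 12 below):

node 'web01.example.com' {
  include apache
}

node default {
  include base::linux
}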

Puppet Question # 9) What are facts in Puppet?

Answer # Before requesting a catalog (or compiling one with puppet apply), Puppet will collect system information with Facter. Puppet receives this information as facts, which are pre-set variables you can use anywhere in your manifests.
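
For instance, on recent Puppet and Facter versions a manifest can branch on the built-in os family fact (a minimal sketch):

case $facts['os']['family'] {
  'RedHat': { $web_package = 'httpd' }
  'Debian': { $web_package = 'apache2' }
  default:  { fail("Unsupported OS family: ${facts['os']['family']}") }
}

package { $web_package:
  ensure => installed,
}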

Puppet Question # 10) Which facts can Puppet access?

Answer # Puppet can access the following facts:

  • Facter's built-in core facts
  • Any custom facts or external facts present in your modules

Top Puppet Interview Questions

Puppet Interview Question # 11) What are functions in Puppet?

Answer # You can write your own functions in the Puppet language to transform data and construct values. A function can optionally take one or more parameters as arguments. A function returns a calculated value from its final expression.

Function Syntax in Puppet: 

function <MODULE NAME>::<NAME>(<PARAMETER LIST>) >> <RETURN TYPE> {
  ... body of function ...
  final expression, which will be the returned value of the function
}
Function Example in Puppet: 

function apache::bool2http(Variant[String, Boolean] $arg) >> String {
  case $arg {
    false, undef, /(?i:false)/ : { 'Off' }
    true, /(?i:true)/          : { 'On' }
    default                    : { "$arg" }
  }
}

Puppet Interview Question # 12) What is a class in Puppet, explain with example?

Answer # Classes are named blocks of Puppet code that are stored in modules for later use and are not applied until they are invoked by name. They can be added to a node’s catalog by either declaring them in your manifests or assigning them from an ENC.

Classes generally configure large or medium-sized chunks of functionality, such as all of the packages, config files, and services needed to run an application.

Defining classes in Puppet: 
Defining a class makes it available for later use. It doesn’t yet add any resources to the catalog; to do that, you must declare it or assign it from an ENC.

Class Syntax in Puppet: 

# A class with no parameters
class base::linux {
  file { '/etc/passwd':
    owner => 'root',
    group => 'root',
    mode  => '0644',
  }
  file { '/etc/shadow':
    owner => 'root',
    group => 'root',
    mode  => '0440',
  }
}

# A class with parameters
class apache (String $version = 'latest') {
  package { 'httpd':
    ensure => $version,
    # Using the class parameter from above
    before => File['/etc/httpd.conf'],
  }
  file { '/etc/httpd.conf':
    ensure  => file,
    owner   => 'httpd',
    # Template from a module
    content => template('apache/httpd.conf.erb'),
  }
  service { 'httpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/httpd.conf'],
  }
}

Puppet Interview Question # 13) How can you configure systems with Puppet?

Answer # You can configure systems with Puppet either in a client/server architecture, using the Puppet agent and Puppet master applications, or in a stand-alone architecture, using the Puppet apply application.

Puppet Interview Question # 14) What are Catalogs in Puppet?

Answer # A catalog is a document that describes the desired system state for one specific computer. It lists all of the resources that need to be managed, as well as any dependencies between those resources.

Puppet configures systems in two stages:

  1. Compile a catalog.
  2. Apply the catalog.

Puppet Interview Question # 15) Explain the agent/master architecture in Puppet?

Answer # When set up as an agent/master architecture, a Puppet master server controls the configuration information, and each managed agent node requests its own configuration catalog from the master.

In this architecture, managed nodes run the Puppet agent application, usually as a background service. One or more servers run the Puppet master application, Puppet Server.

Puppet Interview Question # 16) How can Puppet agent nodes and Puppet masters communicate with each other?

Answer # Puppet agent nodes and Puppet masters communicate by HTTPS with client verification.

The Puppet master provides an HTTP interface, with various endpoints available. When requesting or submitting anything to the master, the agent makes an HTTPS request to one of those endpoints.

Client-verified HTTPS means each master or agent must have an identifying SSL certificate. They each examine their counterpart’s certificate to decide whether to allow an exchange of information.
Puppet includes a built-in certificate authority for managing certificates.

Agents can automatically request certificates through the master’s HTTP API. You can use the puppet cert command to inspect requests and sign new certificates. And agents can then download the signed certificates.

Puppet Interview Question # 17) Can you explain stand-alone architecture in Puppet?

Answer # Puppet can run in a stand-alone architecture, where each managed node has its own complete copy of your configuration info and compiles its own catalog.

In this architecture, managed nodes run the Puppet apply application, usually as a scheduled task or cron job. You can also run it on demand for initial configuration of a server or for smaller configuration tasks.

Like the Puppet master application, Puppet apply needs access to several sources of configuration data, which it uses to compile a catalog for the node it is managing.

Puppet Interview Question # 18) How do you install the Puppet agent on Linux?

Answer # Install the Puppet agent so that your master can communicate with your Linux nodes.

1. Install a release package to enable Puppet Platform repositories.
2. Confirm that you can run Puppet executables.
The location for Puppet’s executables is /opt/puppetlabs/bin/, which is not in your PATH environment variable by default.

The executable path doesn’t matter for Puppet services — for instance, service puppet start works regardless of the PATH — but if you’re running interactive puppet commands, you must either add their location to your PATH or execute them using their full path.

To quickly add the executable location to your PATH for your current terminal session, use the command export PATH=/opt/puppetlabs/bin:$PATH. You can also add this location wherever you configure your PATH, such as your .profile or .bashrc configuration files.

For more information, see details about file and directory locations.

3. Install the puppet-agent package on your Puppet agent nodes using the command appropriate to your system:
  • Yum: sudo yum install puppet-agent
  • Apt: sudo apt-get install puppet-agent
  • Zypper: sudo zypper install puppet-agent

4. (Optional) Configure agent settings.
For example, if your master isn't reachable at the default address (server = puppet), set the server setting to your Puppet master's hostname.
For other settings you might want to change, see a list of agent-related settings.

5. Start the puppet service: sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true.

6. (Optional) To see a sample of Puppet agent's output and verify any changes you made to your configuration settings in step 4, manually launch and watch a Puppet run: sudo /opt/puppetlabs/bin/puppet agent --test

7. Sign certificates on the certificate authority (CA) master.
On the Puppet master:

1. Run sudo /opt/puppetlabs/bin/puppet cert list to see any outstanding requests.

2. Run sudo /opt/puppetlabs/bin/puppet cert sign <NAME> to sign a request.

As each Puppet agent runs for the first time, it submits a certificate signing request (CSR) to the CA Puppet master. You must log into that server to check for and sign certificates. After an agent’s certificate is signed, it regularly fetches and applies configuration catalogs from the Puppet master.

Puppet Interview Question # 19) What is Puppet codedir?

Answer # Puppet’s codedir is the main directory for Puppet code and data. It contains environments (which contain your manifests and modules), a global modules directory for all environments, and your Hiera data.

Puppet Interview Question # 20) Where do you find codedir in Puppet?

Answer # Puppet’s codedir can be found at one of the following locations:
  • *nix systems: /etc/puppetlabs/code
  • Windows: %PROGRAMDATA%\PuppetLabs\code (usually C:\ProgramData\PuppetLabs\code)
  • non-root users: ~/.puppetlabs/etc/code

Puppet Interview Question # 21) Where is the codedir configured in Puppet?

Answer # The location of the codedir can be configured in puppet.conf with the codedir setting, but note that Puppet Server doesn’t use that setting; it has its own jruby-puppet.master-code-dir setting in puppetserver.conf. If you’re using a non-default codedir, you must change both settings.
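
As a sketch, the two settings might look like this, shown here with the default *nix codedir (unrelated settings omitted):

# puppet.conf
[main]
codedir = /etc/puppetlabs/code

# puppetserver.conf (HOCON format)
jruby-puppet: {
    master-code-dir: /etc/puppetlabs/code
}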

Advanced Puppet Interview Questions

Puppet Interview Questions # 22) What is a main manifest or site manifest in Puppet?

Answer # Puppet always starts compiling with either a single manifest file or a directory of manifests that get treated like a single file. This main starting point is called the main manifest or site manifest.

Puppet Interview Questions # 23) What is Puppet apply?

Answer # The puppet apply command requires a manifest as an argument on the command line. (For example: puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp.) It can be a single file or a directory of files.

The puppet apply command does not automatically use an environment’s manifest. Instead, it always uses the manifest you pass to it.

Puppet Interview Questions # 24) What is modulepath in Puppet?

Answer # The Puppet master service and the puppet apply command both load most of their content from modules. (See the page on module structure and behavior for more details.)

Puppet automatically loads modules from one or more directories. The list of directories Puppet will find modules in is called the modulepath.
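
One way to inspect the effective modulepath is the puppet config subcommand, for example:

puppet config print modulepath
puppet config print modulepath --section master --environment production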

Puppet Interview Questions # 25) What is base modulepath?

Answer # The base modulepath is a list of global module directories for use with all environments. It can be configured with the basemodulepath setting, but its default value is probably suitable for you unless you’re doing something unusual.

The default value of the basemodulepath setting is $codedir/modules:/opt/puppetlabs/puppet/modules. (On Windows, it will just use $codedir\modules.)

Puppet Interview Questions # 26) What is SSLdir?

Answer # Puppet stores its certificate infrastructure in the ssldir directory. This directory has a similar structure on all Puppet nodes, whether they are agent nodes, Puppet master servers, or the certificate authority (CA) master.

Puppet Labs Interview Questions

Puppet Interview Questions # 27) What does the ssldir directory contain in Puppet?

Answer # The ssldir directory contains Puppet certificates, private keys, certificate signing requests (CSRs), and other cryptographic documents.
The ssldir directory on agent nodes and Puppet masters contains a private key (private_keys/<certname>.pem), a public key (public_keys/<certname>.pem), a signed certificate (certs/<certname>.pem), a copy of the CA certificate (certs/ca.pem), and a copy of the certificate revocation list (CRL) (crl.pem).

They usually also retain a copy of their CSR after submitting it (certificate_requests/<certname>.pem). If these files don’t exist, they are either generated locally or requested from the CA Puppet master.

Puppet Interview Questions # 28) What is cache directory (vardir) in Puppet?

Answer # Puppet’s cache directory, sometimes called vardir, contains dynamic or growing data that Puppet creates in the course of its normal operations. Some of this data can be mined for interesting analysis, or to integrate other tools with Puppet. Other parts are just infrastructure and can be ignored.

Puppet Interview Questions # 29) Where can the vardir be stored in Puppet?

Answer # Puppet Server's cache directory defaults to /opt/puppetlabs/server/data/puppetserver.

The cache directory for Puppet agent and Puppet apply can be found at one of the following locations:

  • *nix systems: /var/opt/puppetlabs/puppet/cache
  • non-root users: ~/.puppetlabs/opt/puppet/cache
  • Windows: %PROGRAMDATA%\PuppetLabs\puppet\cache (usually C:\ProgramData\PuppetLabs\puppet\cache)

Puppet Interview Questions # 30) What are Environments in Puppet?

Answer # Environments are isolated groups of Puppet agent nodes.
A Puppet master serves each environment with its own main manifest and module path. This lets you use different versions of the same modules for different groups of nodes, which is useful for testing changes to your Puppet code before implementing them on production machines.

Puppet Interview Questions # 31) What are the types of environments?

Answer # The main uses for environments fall into three categories: permanent test environments, temporary test environments, and divided infrastructure.

Puppet Real Time Interview Questions

Puppet Interview Questions # 32) What are Permanent test environments in Puppet?

Answer # In a permanent test environment, there is a stable group of test nodes where all changes must succeed before they can be merged into the production code. The test nodes are a smaller version of the whole production infrastructure.

Puppet Interview Question # 33) What are Temporary test environments in Puppet?

Answer # In a temporary test environment, you can test a single change or group of changes by checking the changes out of version control into the $codedir/environments directory, where it will be detected as a new environment. A temporary test environment can either have a descriptive name or use the commit ID from the version that it is based on.

Puppet Interview Questions # 34) What is Divided infrastructure in Puppet?

Answer # If parts of your infrastructure are managed by different teams that don’t need to coordinate their code, you can split them into environments.

Puppet Interview Questions # 35) What are modules in Puppet?

Answer # Modules are self-contained bundles of code and data. These reusable, shareable units of Puppet code are a basic building block for Puppet.

Nearly all Puppet manifests belong in modules. The sole exception is the main site.pp manifest, which contains site-wide and node-specific code.

Puppet Interview Questions # 36) What is Module layout in Puppet?

Answer # On disk, a module is a directory tree with a specific, predictable structure:
<MODULE NAME>/

  • manifests
  • files
  • templates
  • lib
  • facts
  • examples
  • spec
  • functions
  • types

Puppet Real Time Scenarios

Puppet Interview Question # 37) What are the types of plug-ins in Puppet modules?

Answer # Puppet supports several kinds of plug-ins:

  • Custom facts (written in Ruby)
  • External facts (executable scripts or static data)
  • Custom resource types and providers (written in Ruby)
  • Custom functions written in Ruby
  • Custom functions written in the Puppet language
  • Custom Augeas lenses
  • Miscellaneous utility Ruby code used by other plug-ins

Puppet Interview Questions # 38) What is the puppet module command?

Answer # The puppet module command provides an interface for managing modules from the Puppet Forge. Its interface is similar to several common package managers (such as gem, apt-get, or yum). You can use the puppet module command to search for, install, and manage modules.

Puppet Interview Questions # 39) Explain the process of installing modules from the command line in Puppet?

Answer # The puppet module install command installs a module and all of its dependencies.

By default, it installs into the first directory in Puppet’s modulepath, which defaults to $codedir/environments/production/modules.

For example, to install the puppetlabs-apache module, run:
puppet module install puppetlabs-apache

Puppet Interview Questions # 40) Explain the process of installing modules from the Puppet Forge?

Answer # To install a module from the Puppet Forge, use the puppet module install command with the full name of the module you want.
The full name of a Forge module is formatted as username-modulename. For example, to install puppetlabs-apache:
puppet module install puppetlabs-apache

Puppet Interview Questions # 41) How do you check the installed modules in Puppet?

Answer # Use the puppet module list command to see which modules you have installed and which directory they’re installed in.

To view the modules arranged by dependency instead of location on disk, use the --tree option.

Puppet Master Interview Questions

Puppet Interview Questions # 42) How do you uninstall modules in Puppet?

Answer # Use the puppet module uninstall command to remove an installed module.
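
For example, to remove the puppetlabs-apache module installed earlier:

puppet module uninstall puppetlabs-apache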

Puppet Interview Questions # 43) What are the core commands of Puppet?

Answer # Core commands of Puppet are:

  • Puppet Agent
  • Puppet Server
  • Puppet Apply
  • Puppet Cert
  • Puppet Module
  • Puppet Resource
  • Puppet Config
  • Puppet Parser
  • Puppet Help
  • Puppet Man

Puppet Interview Questions # 44) What is Puppet agent?

Answer # Puppet agent manages systems, with the help of a Puppet master. It requests a configuration catalog from a Puppet master server, then ensures that all resources in that catalog are in their desired state.

Puppet Interview Questions # 45) What is Puppet Server?

Answer # Puppet Server compiles configurations for any number of Puppet agents, using Puppet code and various other data sources. It provides the same services as the classic Puppet master application, and more.

Puppet Interview Questions # 46) What is Puppet apply?

Answer # Puppet apply manages systems without needing to contact a Puppet master server. It compiles its own configuration catalog, using Puppet modules and various other data sources, then immediately applies the catalog.

Puppet Interview Questions # 47) What is Puppet cert?

Answer # Puppet cert helps manage Puppet’s built-in certificate authority (CA). It runs on the same server as the Puppet master application. You can use it to sign and revoke agent certificates.

Puppet Interview Questions # 48) What is Puppet module?

Answer # Puppet module is a multi-purpose tool for working with Puppet modules. It can install and upgrade new modules from the Puppet Forge, help generate new modules, and package modules for public release.

Puppet Interview Questions # 49) What is Puppet resource?

Answer # Puppet resource lets you interactively inspect and manipulate resources on a system. It can work with any resource type Puppet knows about.

Puppet Interview Questions # 50) What is Puppet config?

Answer # Puppet config lets you view and change Puppet’s settings.

Puppet Interview Questions # 51) What is Puppet parser?

Answer # Puppet parser lets you validate Puppet code to make sure it contains no syntax errors. It can be a useful part of your continuous integration toolchain.
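
For example, to check a manifest for syntax errors (the path is illustrative):

puppet parser validate manifests/site.pp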

Puppet Command Cheat Sheet

Puppet Interview Questions # 52) What are Puppet help and Puppet man?

Answer # Puppet help and Puppet man can display online help for Puppet’s other subcommands.

Puppet Interview Questions # 53) What are the subcommands in Puppet?

Answer # Puppet’s command line tools consist of a single puppet binary with many subcommands. The following subcommands are available in this version of Puppet:

Core Tools: These subcommands form the core of Puppet’s tool set, and every user should understand what they do.

  1. puppet agent
  2. puppet apply
  3. puppet cert
  4. puppet master
  5. puppet module
  6. puppet resource
  7. puppet lookup

Occasionally Useful Subcommands

Many or most users will need to use these subcommands at some point, but they aren’t needed for daily use the way the core tools are.

  • puppet config
  • puppet describe
  • puppet device
  • puppet doc
  • puppet epp
  • puppet help
  • puppet man
  • puppet node
  • puppet parser
  • puppet plugin

Niche Subcommands: Most users can ignore these subcommands. They’re only useful for certain niche workflows, and most of them are interfaces to Puppet’s internal subsystems.

  1. puppet ca
  2. puppet catalog
  3. puppet certificate
  4. puppet certificate_request
  5. puppet certificate_revocation_list
  6. puppet facts
  7. puppet filebucket
  8. puppet key
  9. puppet report
  10. puppet status

Unknown or New Subcommands: These subcommands have not yet been added to any of the categories above.

  • puppet generate

Puppet Interview Questions # 54) Explain Puppet Server?

Answer # Puppet Server is an application that runs on the Java Virtual Machine (JVM) and provides the same services as the classic Puppet master application. It mostly does this by running the existing Puppet master code in several JRuby interpreters, but it replaces some parts of the classic application with new services written in Clojure.

Puppet Interview Questions # 55) What is Hiera?

Answer # Hiera is Puppet's built-in key/value configuration data lookup system, used to separate data from Puppet code.

Puppet’s strength is in reusable code. Code that serves many needs must be configurable: put site-specific information in external configuration data files, rather than in the code itself.

Puppet Interview Questions # 56) Why does Puppet use Hiera?

Answer # Puppet uses Hiera to do two things:

  • Store the configuration data in key-value pairs
  • Look up what data a particular module needs for a given node during catalog compilation

This is done via:

  • Automatic parameter lookup for classes included in the catalog
  • Explicit lookup calls
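
A minimal Hiera sketch, assuming a standard Hiera 5 hierarchy whose common level maps to data/common.yaml (the key name and values are hypothetical):

# data/common.yaml
ntp::servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org

# Explicit lookup call in a manifest
$servers = lookup('ntp::servers')

# Automatic parameter lookup: the same key fills $servers when the class is included
class ntp (Array[String] $servers) {
  # ...
}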

Puppet Interview Questions # 57) What is PSON in Puppet?

Answer # PSON is a variant of JSON that puppet uses for serializing data to transmit across the network or store on disk. Whereas JSON requires that the serialized form is valid unicode (usually UTF-8), PSON is 8-bit ASCII, which allows it to represent arbitrary byte sequences in strings.

Puppet uses the MIME types “pson” and “text/pson” to refer to PSON.

Puppet Interview Questions # 58) How is PSON different from JSON?

Answer # PSON does not differ from JSON in its representation of objects, arrays, numbers, booleans, and null values, but it does serialize strings differently. A PSON string is a sequence of 8-bit ASCII encoded data, and it must start and end with " (ASCII 0x22) characters.

RELATED INTERVIEW QUESTIONS

  1. DB2 Interview Questions
  2. AnthillPro Interview Questions
  3. Angular 2 Interview Questions
  4. Hibernate Interview Questions
  5. ASP.NET Interview Questions
  6. PHP Interview Questions
  7. Kubernetes Interview Questions
  8. Docker Interview Questions
  9. CEH Interview Questions
  10. CyberArk Interview Questions
  11. Appian Interview Questions
  12. Drools Interview Questions
  13. Talend Interview Questions
  14. Selenium Interview Questions
  15. Ab Initio Interview Questions
  16. AB Testing Interview Questions
  17. Mobile Application Testing Interview Questions
  18. Pega Interview Questions
  19. UI Developer Interview Questions
  20. Tableau Interview Questions
  21. SAP ABAP Interview Questions
  22. Reactjs Interview Questions
  23. UiPath Interview Questions
  24. Automation Anywhere Interview Questions
  25. RPA Interview Questions
  26. RPA Blue Prism Interview Questions
  27. Ranorex Interview Questions
  28. AWS Interview Questions
  29. SSRS Interview Questions
  30. SQL Interview Questions

Kubernetes Interview Questions And Answers 2018

Kubernetes Interview Questions And Answers 2018. If you are looking for Docker Kubernetes DevOps interview questions, here in this article Coding compiler is sharing 31 interview questions on Kubernetes. These Kubernetes questions will help you to crack your next Kubernetes job interview. All the best for your future and happy learning.

Kubernetes Interview Questions

  1. What is Kubernetes?
  2. What is Kubernetes and how to use it?
  3. What is the meaning of Kubernetes?
  4. What is a docker?
  5. What is orchestration in software?
  6. What is a cluster in Kubernetes?
  7. What is a swarm in Docker?
  8. What is Openshift?
  9. What is a namespace in Kubernetes?
  10. What is a node in Kubernetes?
  11. What is Docker and what does it do?
  12. What is a Heapster?
  13. Why do we use Docker?
  14. What is a docker in cloud?
  15. What is a cluster of containers?
  16. What is the Kubelet?
  17. What is Minikube?
  18. What is Kubectl?
  19. What is the Gke?
  20. What is k8s?
  21. What is KUBE proxy?

Kubernetes Interview Questions And Answers

Kubernetes at a glance:

  • Kubernetes is: Open-source software
  • Kubernetes is a: System for automating deployment, scaling, and management of containerized applications
  • Kubernetes was: Originally designed by Google and is now maintained by the Cloud Native Computing Foundation
  • Kubernetes aims to: Provide a platform for automating deployment, scaling, and operations of application containers
  • Developed by: Google
  • License: Apache License 2.0
  • Written in: Go
 
Kubernetes Interview Question # 1) What is Kubernetes?

A) Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. 

 

Kubernetes Interview Question # 2) What is Kubernetes and how to use it?

A) Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. With Kubernetes, you are able to quickly and efficiently respond to customer demand: Deploy your applications quickly and predictably.

 

Kubernetes Interview Question # 3) What is the meaning of Kubernetes?

A) Kubernetes (commonly referred to as “K8s”) is an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and donated to the Cloud Native Computing Foundation.

Docker Kubernetes Interview Questions For Experienced

Kubernetes Interview Question # 4) What is a docker?

A) Docker is an open source software development platform. Its main benefit is to package applications in “containers,” allowing them to be portable among any system running the Linux operating system (OS).

 

Kubernetes Interview Question # 5) What is orchestration in software?

A) Application Orchestration. Application or service orchestration is the process of integrating two or more applications and/or services together to automate a process, or synchronize data in real-time. Often, point-to-point integration may be used as the path of least resistance.

 

Kubernetes Questions # 6) What is a cluster in Kubernetes?

A) A container cluster consists of master and node machines that run the Kubernetes cluster orchestration system. A container cluster is the foundation of Container Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.

 

Interview Questions on Kubernetes # 7) What is a swarm in Docker?

A) Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.

 

Kubernetes Openshift Interview Question # 8) What is Openshift?

A) OpenShift Online is Red Hat’s public cloud application development and hosting platform that automates the provisioning, management and scaling of applications so that you can focus on writing the code for your business, startup, or big idea.

Advanced Kubernetes Interview Questions

Docker and Kubernetes Interview Question # 9) What is a namespace in Kubernetes?

A) Namespaces are intended for use in environments with many users spread across multiple teams, or projects. Namespaces are a way to divide cluster resources between multiple uses (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default.
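
As a sketch (the namespace name and file name are hypothetical), a namespace can be declared in YAML and then used to scope kubectl commands:

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a

# Apply it, then scope later commands to the namespace
kubectl apply -f namespace.yaml
kubectl get pods --namespace=team-a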

 

Kubernetes Interview Question # 10) What is a node in Kubernetes?

A) A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker, kubelet and kube-proxy.

 

Kubernetes Interview Question # 11) What is Docker and what does it do?

A) Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

 

Kubernetes Interview Question # 12) What is a Heapster?

A) Heapster is a cluster-wide aggregator of monitoring and event data. It supports Kubernetes natively and works on all Kubernetes setups, including Deis Workflow setups.

 

Kubernetes Interview Question # 13) Why do we use Docker?

A) Docker provides this same capability without the overhead of a virtual machine. It lets you put your environment and configuration into code and deploy it. The same Docker configuration can also be used in a variety of environments. This decouples infrastructure requirements from the application environment.

 

Kubernetes Interview Question # 14) What is a docker in cloud?

A) A node is an individual Linux host used to deploy and run your applications. Docker Cloud does not provide hosting services, so all of your applications, services, and containers run on your own hosts. Your hosts can come from several different sources, including physical servers, virtual machines or cloud providers.

 

Kubernetes Interview Question # 15) What is a cluster of containers?

A) A container cluster is a set of Compute Engine instances called nodes. It also creates routes for the nodes, so that containers running on the nodes can communicate with each other. The Kubernetes API server does not run on your cluster nodes. Instead, Container Engine hosts the API server.

Real-Time Kubernetes Scenario Based Interview Questions

Kubernetes Interview Questions # 16) What is the Kubelet?

A) Kubelets run pods. The unit of execution that Kubernetes works with is the pod. A pod is a collection of containers that share some resources: they have a single IP, and can share volumes.
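
For illustration, a minimal pod manifest might look like this (the pod name and image tag are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.14
      ports:
        - containerPort: 80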

 

Kubernetes Interview Questions # 17) What is Minikube?

A) Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

 

Kubernetes Interview Questions # 18) What is Kubectl?

A) kubectl is a command line interface for running commands against Kubernetes clusters. This overview covers kubectl syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation.
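
A few representative invocations (the resource and file names here are illustrative):

kubectl get nodes                      # list cluster nodes
kubectl get pods --all-namespaces      # list pods across all namespaces
kubectl apply -f pod.yaml              # create or update resources from a manifest
kubectl describe pod nginx-pod         # show details and events for a pod
kubectl logs nginx-pod                 # fetch a pod's container logs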

 

Kubernetes Interview Questions # 19) What is the Gke?

A) Google Container Engine (GKE) is a management and orchestration system for Docker container and container clusters that run within Google’s public cloud services. Google Container Engine is based on Kubernetes, Google’s open source container management system.

 

Kubernetes Interview Questions # 20) What is k8s?

A) Kubernetes, also sometimes called K8S (K – eight characters – S), is an open source orchestration framework for containerized applications that was born from the Google data centers.

 

Kubernetes Interview Questions # 21) What is KUBE proxy?

A) The Kubernetes network proxy (kube-proxy) runs on each node. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs.

 

Kubernetes Interview Questions # 22) Which process runs on Kubernetes master node?

A) The kube-apiserver process runs on the Kubernetes master node.

 

Kubernetes Interview Questions # 23) Which process runs on Kubernetes non-master node?

A) The kube-proxy process runs on Kubernetes non-master nodes.

 

Kubernetes Interview Questions # 24) Which process validates and configures data for API objects like pods and services?

A) The kube-apiserver process validates and configures data for the API objects.

 

Kubernetes Interview Questions # 25) What is the use of kube-controller-manager?

A) kube-controller-manager embeds the core control loop which is a non-terminating loop that regulates the state of the system.

 

Kubernetes Interview Questions # 26) Kubernetes objects made up of what?

A) Kubernetes objects are made up of Pods, Services, and Volumes.

 

Kubernetes Interview Questions # 27) What are Kubernetes controllers?

A) Kubernetes controllers include the ReplicaSet and Deployment controllers.

 

Kubernetes Interview Questions # 28) Where Kubernetes cluster data is stored?

A) etcd is responsible for storing Kubernetes cluster data.

 

Kubernetes Interview Questions # 29) What is the role of kube-scheduler?

A) kube-scheduler is responsible for assigning a node to newly created pods.

 

Kubernetes Interview Questions # 30) Which container runtimes supported by Kubernetes?

A) Kubernetes supports the Docker and rkt container runtimes.

 

Kubernetes Interview Questions # 31) Which components interact with the Kubernetes node interface?

A) The Kubectl, Kubelet, and Node Controller components interact with the Kubernetes node interface.

RELATED INTERVIEW QUESTIONS

  1. Docker Interview Questions
  2. CEH Interview Questions
  3. CyberArk Interview Questions
  4. Appian Interview Questions
  5. Drools Interview Questions
  6. Talend Interview Questions
  7. Selenium Interview Questions
  8. Ab Initio Interview Questions
  9. AB Testing Interview Questions
  10. Mobile Application Testing Interview Questions
  11. Pega Interview Questions
  12. UI Developer Interview Questions
  13. Tableau Interview Questions
  14. SAP ABAP Interview Questions
  15. Reactjs Interview Questions
  16. UiPath Interview Questions
  17. Automation Anywhere Interview Questions
  18. RPA Interview Questions
  19. RPA Blue Prism Interview Questions
  20. Ranorex Interview Questions
  21. AWS Interview Questions
  22. SSRS Interview Questions
  23. SQL Interview Questions
  24. Informatica MDM Interview Questions
  25. CyberArk Interview Questions
  26. SAP SD Interview Questions
  27. SAP EWM Interview Questions
  28. Advanced Javascript Interview Questions
  29. Angular 2 Interview Questions
  30. Advanced Java Interview Questions

Ansible Interview Questions And Answers For Experienced

Advanced Ansible interview questions and answers for experienced professionals. Here in this post Coding compiler is presenting a list of scenario-based Ansible DevOps interview questions and answers. By reading these Ansible technical interview questions, you will gain the knowledge needed to face an Ansible job interview. Good luck for your future and happy learning.

Ansible Interview Questions

  1. How can I set the PATH or any other environment variable for a task or entire playbook?
  2. How do I handle different machines needing different user accounts or ports to log in with?
  3. How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
  4. How do I configure a jump host to access servers that I have no direct access to?
  5. How do I speed up management inside EC2?
  6. How do I handle python pathing not having a Python 2.X in /usr/bin/python on a remote machine?
  7. What is the best way to make content reusable/redistributable?
  8. Where does the configuration file live and what can I configure in it?
  9. How do I disable cowsay?
  10. How do I see a list of all of the ansible_ variables?
  11. How do I see all the inventory vars defined for my host?
  12. How do I loop over a list of hosts in a group, inside of a template?
  13. How do I access a variable name programmatically?
  14. How do I access a variable of the first host in a group?
  15. How do I copy files recursively onto a target host?
  16. How do I access shell environment variables?
  17. How do I generate crypted passwords for the user module?
  18. Is there a web interface / REST API / etc?
  19. How do I keep secret data in my playbook?
  20. When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
  21. Why don’t you ship in X format?

Ansible Interview Questions And Answers

Ansible Interview Questions # 1) How can I set the PATH or any other environment variable for a task or entire playbook?

A) Setting environment variables can be done with the environment keyword. It can be used at the task or the play level:

environment:
  PATH: "{{ ansible_env.PATH }}:/thingy/bin"
  SOME: value

Note: Starting in 2.0.1, the setup task run by gather_facts also inherits the environment directive from the play, so you might need to use the |default filter to avoid errors if setting this at the play level.

Ansible Interview Questions # 2) How do I handle different machines needing different user accounts or ports to log in with?

A) Setting inventory variables in the inventory file is the easiest way.

Note: Ansible 2.0 deprecated the "ssh" in ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port; these became ansible_user, ansible_host, and ansible_port.

If you are using a version of Ansible prior to 2.0, you should continue using the older style variables (ansible_ssh_*). The newer, shorter variables are ignored, without warning, in older versions of Ansible.

For instance, suppose these hosts have different usernames and ports:

[webservers]
asdf.example.com  ansible_port=5000   ansible_user=alice
jkl.example.com   ansible_port=5001   ansible_user=bob

You can also dictate the connection type to be used, if you want:

[testcluster]
localhost           ansible_connection=local
/path/to/chroot1    ansible_connection=chroot
foo.example.com     ansible_connection=paramiko

You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file. See the rest of the documentation for more information about how to organize variables.

Ansible Interview Questions # 3) How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?

A) Switch your default connection type in the configuration file to ‘ssh’, or use ‘-c ssh’ to use Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ‘ssh’ will be used by default if OpenSSH is new enough to support ControlPersist as an option.

Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible from a machine new enough to support ControlPersist, if you are using this connection type.

You can still manage older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.

We keep paramiko as the default as if you are first installing Ansible on an EL box, it offers a better experience for new users.

Ansible Interview Questions # 4) How do I configure a jump host to access servers that I have no direct access to?

A) With Ansible 2, you can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:

[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2

You can create group_vars/gatewayed.yml with the following contents:

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'

Ansible will append these arguments to the command line when trying to connect to any hosts in the group gatewayed. (These arguments are used in addition to any ssh_args from ansible.cfg, so you do not need to repeat global ControlPersist settings in ansible_ssh_common_args.)

Note that ssh -W is available only with OpenSSH 5.4 or later. With older versions, it’s necessary to execute nc %h:%p or some equivalent command on the bastion host.

With earlier versions of Ansible, it was necessary to configure a suitable ProxyCommand for one or more hosts in ~/.ssh/config, or globally by setting ssh_args in ansible.cfg.

Ansible Interview Questions # 5) How do I speed up management inside EC2?

A) Don’t try to manage a fleet of EC2 machines from your laptop. Connect to a management node inside EC2 first and run Ansible from there.

Advanced Ansible Interview Questions And Answers

Ansible Interview Questions # 6) How do I handle python pathing not having a Python 2.X in /usr/bin/python on a remote machine?

A) While you can write ansible modules in any language, most ansible modules are written in Python, and some of these are important core ones.

By default, Ansible assumes it can find a /usr/bin/python on your remote system that is a 2.X version of Python, specifically 2.6 or higher.

Setting the inventory variable ‘ansible_python_interpreter’ on any host will allow Ansible to auto-replace the interpreter used when executing python modules.

Thus, you can point to any python you want on the system if /usr/bin/python on your system does not point to a Python 2.X interpreter.
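
For example (the hostname and interpreter path are hypothetical), an inventory entry might look like this:

[legacy]
host.example.com ansible_python_interpreter=/usr/local/bin/python2.7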

Some Linux operating systems, such as Arch, may only have Python 3 installed by default. This is not sufficient and you will get syntax errors trying to run modules with Python 3. Python 3 is essentially not the same language as Python 2.

Python 3 support is being worked on but some Ansible modules are not yet ported to run under Python 3.0. This is not a problem though as you can just install Python 2 also on a managed host.

Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.

Ansible Interview Questions # 7) What is the best way to make content reusable/redistributable?

A) If you have not done so already, read all about “Roles” in the playbooks documentation. This helps you make playbook content self-contained, and works well with things like git submodules for sharing content with others.

If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.

Ansible Interview Questions # 8) Where does the configuration file live and what can I configure in it?

A) See Configuration file.

Ansible Interview Questions # 9) How do I disable cowsay?

A) If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that you would like to work in a professional cow-free environment, you can either uninstall cowsay, or set the ANSIBLE_NOCOWS environment variable:

export ANSIBLE_NOCOWS=1

Ansible Interview Questions # 10) How do I see a list of all of the ansible_ variables?

A) Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:

ansible -m setup hostname

This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe the output to a pager.

Ansible Interview Questions And Answers For Experienced

Ansible Interview Questions # 11) How do I see all the inventory vars defined for my host?

A) By running the following command, you can see vars resulting from what you’ve defined in the inventory:

ansible -m debug -a "var=hostvars['hostname']" localhost

Ansible Interview Questions # 12) How do I loop over a list of hosts in a group, inside of a template?

A) A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the “$groups” dictionary in your template, like this:

{% for host in groups['db_servers'] %}
    {{ host }}
{% endfor %}

If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

- hosts:  db_servers
  tasks:
    - debug: msg="doesn't matter what you do, just that they were talked to previously."

Then you can use the facts inside your template, like this:

{% for host in groups['db_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

Ansible Interview Questions # 13) How do I access a variable name programmatically?

A) An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like so:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

The trick about going through hostvars is necessary because it’s a dictionary of the entire namespace of variables. ‘inventory_hostname’ is a magic variable that indicates the current host you are looping over in the host loop.

Ansible Interview Questions # 14) How do I access a variable of the first host in a group?

A) What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we are using dynamic inventory, which host is the ‘first’ may not be consistent, so you wouldn’t want to do this unless your inventory is static and predictable. (If you are using Ansible Tower, it will use database order, so this isn’t a problem even if you are using cloud based inventory scripts).

Anyway, here’s the trick:

{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}

Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you could use the Jinja2 {% set %} directive to simplify this, or in a playbook, you could also use set_fact:

- set_fact: headnode={{ groups['webservers'][0] }}

- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}

Notice how we interchanged the bracket syntax for dots – that can be done anywhere.

Ansible Devops Interview Questions And Answers

Ansible Interview Questions # 15) How do I copy files recursively onto a target host?

A) The “copy” module has a recursive parameter. However, take a look at the “synchronize” module if you want to do something more efficient for a large number of files. The “synchronize” module wraps rsync. See the module index for info on both of these modules.
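
Two task sketches, assuming hypothetical src and dest paths:

- name: Recursively copy a directory of files onto the target host
  copy:
    src: files/app_conf/
    dest: /etc/app_conf/

- name: The same transfer with rsync via the synchronize module
  synchronize:
    src: files/app_conf/
    dest: /etc/app_conf/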

Ansible Interview Questions # 16) How do I access shell environment variables?

A) If you just need to access existing variables, use the ‘env’ lookup plugin. For example, to access the value of the HOME environment variable on the management machine:

---
# ...
  vars:
     local_home: "{{ lookup('env','HOME') }}"

If you need to set environment variables, see the Advanced Playbooks section about environments.

Starting with Ansible 1.4, remote environment variables are available via facts in the ‘ansible_env’ variable:

{{ ansible_env.SOME_VARIABLE }}

Ansible Interview Questions # 17) How do I generate crypted passwords for the user module?

A) The mkpasswd utility that is available on most Linux systems is a great option:

mkpasswd --method=sha-512

If this utility is not installed on your system (e.g. you are using OS X) then you can still easily generate these passwords using Python. First, ensure that the Passlib password hashing library is installed:

pip install passlib

Once the library is ready, SHA512 password values can then be generated as follows:

python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.using(rounds=5000).hash(getpass.getpass())"

Use the integrated Hashing filters to generate a hashed version of a password. You shouldn’t put plaintext passwords in your playbook or host_vars; instead, use Using Vault in playbooks to encrypt sensitive data.

Ansible Interview Questions # 18) Is there a web interface / REST API / etc?

A) Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See Ansible Tower.

Ansible Interview Questions # 19) How do I keep secret data in my playbook?

A) If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see Using Vault in playbooks.
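
A minimal sketch of the Vault workflow (the file names are hypothetical):

ansible-vault create secrets.yml            # create a new encrypted file
ansible-vault encrypt vars/credentials.yml  # encrypt an existing file
ansible-playbook site.yml --ask-vault-pass  # supply the vault password at run time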

In Ansible 1.8 and later, if you have a task that you don’t want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:

- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True

This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.

The no_log attribute can also apply to an entire play:

- hosts: all
  no_log: True

Though this will make the play somewhat difficult to debug. It’s recommended that this be applied to single tasks only, once a playbook is completed. Note that the use of the no_log attribute does not prevent data from being shown when debugging Ansible itself via the ANSIBLE_DEBUG environment variable.

Ansible Real Time Interview Questions And Answers

Ansible Interview Questions # 20) When should I use {{ }}? Also, how to interpolate variables or dynamic variable names

A) A steadfast rule is 'always use {{ }} except with when:'. Conditionals are always run through Jinja2 to resolve the expression, so when:, failed_when:, and changed_when: are always templated and you should avoid adding {{ }}.

In most other cases you should always use the brackets, even if previously you could use variables without specifying (like with_ clauses), as this made it hard to distinguish between an undefined variable and a string.

Another rule is ‘moustaches don’t stack’. We often see this:

{{ somevar_{{other_var}} }}

The above DOES NOT WORK, if you need to use a dynamic variable use the hostvars or vars dictionary as appropriate:

{{ hostvars[inventory_hostname]['somevar_' + other_var] }}

Ansible Interview Questions # 21) Why don’t you ship in X format?

A) Several reasons, in most cases it has to do with maintainability, there are tons of ways to ship software and it is a herculean task to try to support them all. In other cases there are technical issues, for example, for python wheels, our dependencies are not present so there is little to no gain.

References: Ansible Website

RELATED INTERVIEW QUESTIONS

  1. Accenture Java Interview Questions
  2. Advanced Java Interview Questions
  3. Core Java Interview Questions
  4. .NET Interview Questions
  5. Ansible Interview Questions
  6. ServiceNow Interview Questions
  7. RPA Interview Questions
  8. Blue Prism Interview Questions
  9. SSIS Interview Questions And Answers
  10. Oracle Performance Tuning Interview Questions
  11. SCCM Interview Questions
  12. ServiceNow Interview Questions
  13. SQL Interview Questions
  14. Docker Interview Questions