bregman-arie / devops-interview-questions
Topics: Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, Groovy
A development practice where developers integrate code into a shared repository frequently. The frequency can range from a couple of changes a week to several changes an hour at larger scales.
Each piece of code (change/patch) is verified to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.
In your answer you can mention one or more of the following:
In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure, and over time the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which follow the mutable infrastructure paradigm.
In the immutable infrastructure paradigm, every change is actually new infrastructure. A change to a server results in a new server instead of an update to the existing one. Terraform is an example of a technology which follows the immutable infrastructure paradigm.
Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices. Stateful applications depend on storage to save state and data; databases, typically, are stateful applications.
Alerts, Tickets, Logging
One can argue whether it's a per-company definition or a global one, but at least according to large companies like Google, the SRE team is responsible for the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services.
Configuration drift happens when, in an environment of servers with the exact same configuration and software, certain servers receive updates or configuration which the other servers don't get, and over time these servers become slightly different from all the others.
This situation might lead to bugs which are hard to identify and reproduce.
Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in a mutual build instead of testing each change separately.
IaaS, PaaS, SaaS
Edge locations are essentially content delivery network endpoints which cache data, ensuring lower latency and faster delivery to users in any location. They are located in major cities around the world.
Stop the instance, change the instance type to match the desired RAM, and start the instance.
Application: user end (HTTP is here)
Presentation: establishes context between application-layer entities (encryption is here)
Session: establishes, manages and terminates the connections
Transport: transfers variable-length data sequences from a source to a destination host (TCP & UDP are here)
Network: transfers datagrams from one network to another (IP is here)
Data link: provides a link between two directly connected nodes (MAC is here)
Physical: the electrical and physical spec of the data connection (bits are here)
Unicast: one-to-one communication where there is one sender and one receiver.
Broadcast: sending a message to everyone in the network. The address ff:ff:ff:ff:ff:ff is used for broadcasting. Two common protocols which use broadcast are ARP and DHCP.
Multicast: Sending a message to a group of subscribers. It can be one-to-many or many-to-many.
CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection. Its primary focus is to manage access to a shared medium/bus where only one host can transmit at a given point of time.
CSMA/CD algorithm:
1. Before sending a frame, check whether the medium (the shared bus) is idle.
2. If it is idle, start transmitting; otherwise, wait.
3. While transmitting, keep monitoring the medium; if another host transmitted at the same time, a collision is detected.
4. On collision, stop transmitting, send a jam signal, wait a random backoff time and try again.
TCP establishes a connection between the client and the server to guarantee the order of the packets; UDP, on the other hand, does not establish a connection between client and server and doesn't handle packet order. This makes UDP more lightweight than TCP and a perfect candidate for streaming services.
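To make the contrast concrete, here is a minimal sketch using Python's socket module (the hosts and ports are placeholders):

import socket

# TCP: SOCK_STREAM is connection-oriented; connect() performs the handshake first
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
tcp.close()

# UDP: SOCK_DGRAM is connectionless; sendto() just fires the datagram,
# with no guarantee of delivery or ordering
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("127.0.0.1", 9999))
udp.close()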
00110011110100011101
To schedule tasks, you can use the commands cron and at.
With cron, tasks are scheduled using the following format:
minute hour day_of_month month day_of_week command_to_run
For example, 0 3 * * * /path/to/script runs a script every day at 03:00.
The tasks are stored in a cron file (a crontab).
Normally you will schedule batch jobs this way.
Using the chmod command.
777 - means you are lazy
644 - owner has read+write permissions and everyone else can only read
750 - owner can do anything, group can read and execute, and others can do nothing
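The same octal notation works from code; a small sketch with Python's os module (the file name is hypothetical):

import os

os.chmod("some_file", 0o644)  # owner: rw-, group: r--, others: r--
# read the permission bits back
print(oct(os.stat("some_file").st_mode & 0o777))  # 0o644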
dstat -t is great for identifying network and disk issues.
netstat -tnlaup can be used to see which processes are running on which ports.
lsof -i -P can be used for the same purpose as netstat.
ngrep -d any 'pattern' for matching a regex against packet payloads.
tcpdump for capturing packets
wireshark same concept as tcpdump but with GUI (optional).
dstat -t is great for identifying network and disk issues.
opensnoop can be used to see which files are being opened on the system (in real time).
strace is great for understanding what your program does. It prints every system call your program executed.
top will show you what percentage of the CPU each process consumes
perf is a great choice for a sampling profiler and, in general, for figuring out what your CPU cycles are "wasted" on
flamegraphs is great for CPU consumption visualization (http://www.brendangregg.com/flamegraphs.html)
top to check if anything consumes your CPU or RAM
dstat -t to check if it's related to disk or network
iostat to inspect disk I/O statistics
grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' some_file
grep -E "error|failure" some_file
grep '[0-9]$' some_file

An exit code (or return code) represents the code returned by a child process to its parent process.
0 is an exit code which represents success, while any non-zero value represents a failure. Each number has a different meaning, based on how the application was developed.
I consider this a good blog post to read more about it: https://shapeshed.com/unix-exit-codes
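A quick way to observe exit codes, sketched with Python's subprocess module:

import subprocess

ok = subprocess.run(["true"])
print(ok.returncode)  # 0 - success

fail = subprocess.run(["ls", "/no/such/dir"], capture_output=True)
print(fail.returncode)  # non-zero - the exact value depends on the program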
A hard link is the same file, using the same inode. A soft link is a shortcut to another file, using a different inode.
Soft links can be created between different file systems, while hard links can only be created within the same file system.
You can achieve that by specifying & at the end of the command. As to why: some commands/processes take a long time to finish execution or run forever, and you may want your shell back in the meantime.
The default signal is SIGTERM (15). This signal kills the process gracefully, which means it allows the process to save its current state/configuration before exiting.
SIGTERM - the default signal for terminating a process
SIGHUP - commonly used for reloading configuration
SIGKILL - a signal which cannot be caught or ignored
To view all available signals run kill -l
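What "graceful" means in practice can be sketched with Python's signal module: SIGTERM can be caught to clean up, while SIGKILL cannot be caught at all.

import signal
import time

def handler(signum, frame):
    # last chance to save state/configuration before exiting
    print("received SIGTERM, cleaning up")
    raise SystemExit(0)

signal.signal(signal.SIGTERM, handler)  # trying to register a SIGKILL handler raises OSError
time.sleep(60)  # run `kill <pid>` from another shell to trigger the handler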
Running, Waiting, Stopped, Terminated, Zombie
find /some_dir -iname '*.yml' -print0 | xargs -0 -r sed -i "s/1/2/g"
You can use the commands top and free
You can use the split command this way: split -l 25 some_file
In Linux (and Unix) the first three file descriptors are:
0 - standard input (stdin)
1 - standard output (stdout)
2 - standard error (stderr)
This is a great article on the topic: https://www.computerhope.com/jargon/f/file-descriptor.htm
For each file (and directory) in Linux there is an inode, a data structure which stores metadata related to the file like its size, owner, permissions, etc.
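You can see both claims from Python: the inode metadata, and the fact that a hard link shares an inode while a soft link doesn't (the file paths here are made up):

import os

with open("/tmp/original", "w") as f:
    f.write("data")

os.link("/tmp/original", "/tmp/hardlink")     # hard link: same inode
os.symlink("/tmp/original", "/tmp/softlink")  # soft link: its own inode

print(os.stat("/tmp/original").st_ino == os.stat("/tmp/hardlink").st_ino)   # True
print(os.lstat("/tmp/softlink").st_ino == os.stat("/tmp/original").st_ino)  # False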
/etc/resolv.conf is used for configuring the system's DNS resolution; it includes the IP addresses of the nameservers to use (and optionally search domains).
While an A record points a domain name to an IP address, a PTR record does the opposite and resolves an IP address to a domain name.
open("/my/file") = 5
read(5, "file content")
These system calls are reading the file /my/file and 5 is the file descriptor number.
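Python exposes the same low-level calls through the os module, so you can reproduce this yourself (assuming /my/file exists):

import os

fd = os.open("/my/file", os.O_RDONLY)  # returns a small integer - the file descriptor
data = os.read(fd, 100)                # read up to 100 bytes through that descriptor
os.close(fd)
print(fd, data)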
'lo' is the loopback device, a virtual network interface which the system uses to communicate with itself; it is what answers on 127.0.0.1.
traceroute displays the route packets take to a destination host. It works by sending packets with an increasing TTL value: each hop that decrements the TTL to zero replies with an ICMP "time exceeded" message, which reveals that hop's address.
There are a couple of ways to do that:
This is a good article about the topic: https://ops.tips/blog/how-linux-creates-sockets
Task – a call to a specific Ansible module
Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and are also referred to as task plugins.
Play – One or more tasks executed on a given host(s)
Playbook – One or more plays. Each play can be executed on the same or different hosts
Role – Ansible roles allow you to group resources based on certain functionality/service such that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
An example of inventory file:
192.168.1.2
192.168.1.3
192.168.1.4

[web_servers]
190.40.2.20
190.40.2.21
190.40.2.22
A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
You should use one when using external sources, and especially when the hosts in your environment are being automatically spun up and shut down, without you tracking every change in these sources.
- name: Create a new directory
  file:
    path: "/tmp/new_directory"
    state: directory
---
- name: Print information about my host
  hosts: localhost
  gather_facts: 'no'
  tasks:
    - name: Print hostname
      debug:
        msg: "It's me, {{ ansible_hostname }}"
When given written code, always inspect it thoroughly. If your answer is "this will fail" then you are right. We are using a fact (ansible_hostname), which is a piece of information gathered from the host we are running on. But in this case, we disabled fact gathering (gather_facts: no), so the variable would be undefined, which will result in failure.
---
- hosts: all
  vars:
    mario_file: /tmp/mario
    package_list:
      - 'zlib'
      - 'vim'
  tasks:
    - name: Check for mario file
      stat:
        path: "{{ mario_file }}"
      register: mario_f

    - name: Install zlib and vim if mario file exists
      become: "yes"
      package:
        name: "{{ item }}"
        state: present
      with_items: "{{ package_list }}"
      when: mario_f.stat.exists
I'm <HOSTNAME> and my operating system is <OS>
replace <HOSTNAME> and <OS> with the actual data for the specific host you are running on
The playbook to deploy the system_info file
---
- name: Deploy /tmp/system_info file
  hosts: all:!controllers
  tasks:
    - name: Deploy /tmp/system_info
      template:
        src: system_info.j2
        dest: /tmp/system_info
The content of the system_info.j2 template
# {{ ansible_managed }}
I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
According to variable precedence, which one will be used?
The right answer is ‘toad’.
Variable precedence is about how variables override each other when they are set in different locations. If you haven't experienced it so far, I'm sure at some point you will, which makes it a useful topic to be aware of.
In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
A full list can be found at the link above. Also, note there is a significant difference between Ansible 1.x and 2.x.
def cap(self, string):
    return string.capitalize()
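For context, a custom filter plugin is normally shipped as a FilterModule class that maps filter names to callables; a minimal sketch wrapping the method above:

class FilterModule(object):
    def filters(self):
        # map the filter name used in templates to the callable
        return {'cap': self.cap}

    def cap(self, string):
        return string.capitalize()

In a template you would then use it as {{ 'mario' | cap }}.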
A common wrong answer is to say that Ansible and Puppet are configuration management tools and Terraform is a provisioning tool. While technically true, it doesn't mean Ansible and Puppet can't be used for provisioning infrastructure. Also, it doesn't explain why Terraform should be used over CloudFormation if at all.
Among the benefits of Terraform over the other tools: it is cloud-agnostic and supports many providers (CloudFormation is AWS-only), its declarative HCL syntax is simple to read, and terraform plan lets you preview changes before applying them.
The terraform.tfstate file keeps track of the IDs of created resources so that Terraform knows what it is managing.
terraform init scans your code to figure out which providers you are using and downloads them.
terraform plan lets you see what Terraform is about to do before actually doing it.
terraform validate checks whether the configuration files are syntactically valid.
terraform apply will provision the resources specified in the .tf files.
You define a variable this way: variable "my_var" {}
In the context of provisioners, local-exec runs a command on the machine where Terraform itself runs, while remote-exec runs it on the newly provisioned remote resource.
A "tainted" resource is a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as "tainted".
terraform taint marks a resource as tainted so it will be destroyed and recreated on the next apply.
Supported variable types: String, Integer, Map, List.
terraform output prints the output values defined in your configuration.
The primary difference between containers and VMs is that containers allow you to virtualize multiple workloads on a single operating system, while in the case of VMs the hardware is virtualized to run multiple machines, each with its own OS.
You should choose VMs when: you need to run workloads with different operating systems/kernels, or when you need stronger isolation between workloads.
You should choose containers when: you need lightweight, fast-starting, easily scalable workloads (for example, microservices) that can share the host's kernel.
1. Docker CLI passes your request to the Docker daemon.
2. The Docker daemon downloads the image from Docker Hub.
3. The Docker daemon creates a new container by using the image it downloaded.
4. The Docker daemon redirects output from the container to the Docker CLI, which redirects it to the standard output.
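The same flow can be driven from code; a sketch assuming the official Docker SDK for Python (the docker package) is installed and the daemon is running:

import docker

client = docker.from_env()  # connects to the local Docker daemon
# pulls the image if missing, creates a container, runs it, and returns its output
output = client.containers.run("ubuntu", "echo hello world")
print(output)  # b'hello world\n'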
Docker Hub is a native Docker registry service which allows you to run pull and push commands to install and deploy Docker images from the Docker Hub.
Docker Cloud is built on top of the Docker Hub so Docker Cloud provides you with more options/features compared to Docker Hub. One example is Swarm management which means you can create new swarms in Docker Cloud.
1. It is a high-level, general-purpose programming language created in 1991 by Guido van Rossum.
2. The language is interpreted, with CPython (written in C) being the most used and maintained implementation.
3. It is strongly typed. The typing discipline is duck typing, and gradual typing is supported.
4. Python focuses on readability and makes use of whitespace/indentation instead of brackets { }.
5. The Python package manager is called pip ("pip installs packages"), with more than 200,000 available packages.
6. Python comes with pip installed and a big standard library that offers the programmer many ready-made solutions.
7. In Python, **everything** is an object.
There are many other characteristics, but these are the main ones that every Python programmer should know.
The mutable data types are:
List
Dictionary
Set
The immutable data types are:
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
You can usually use the hash() function to check an object's mutability: if it is hashable, it is immutable. Note that this does not always work as intended, as user-defined objects can be both mutable and hashable.
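A quick demonstration of the hash() heuristic:

for obj in ((1, 2), 'text', frozenset([1, 2]), [1, 2], {'a': 1}, {1, 2}):
    try:
        hash(obj)
        print(type(obj).__name__, "is hashable (likely immutable)")
    except TypeError:
        print(type(obj).__name__, "is unhashable (mutable)")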
PEP8 is a list of coding conventions and style guidelines for Python
5 style guidelines:
1. Limit all lines to a maximum of 79 characters.
2. Surround top-level function and class definitions with two blank lines.
3. Use a trailing comma when making a tuple of one element
4. Use spaces (and not tabs) for indentation
5. Use 4 spaces per indentation level
By definition, inheritance is the mechanism where an object acts as a base of another object, retaining all its properties.
So if class B inherits from class A, every characteristic of class A will also be available in class B.
Class A would be the 'base class' and class B would be the 'derived class'.
This comes in handy when you have several classes that share the same functionality.
The basic syntax is:
class Base: pass
class Derived(Base): pass
A more elaborate example:
class Animal:
    def __init__(self):
        print("and I'm alive!")

    def eat(self, food):
        print("ñom ñom ñom", food)

class Human(Animal):
    def __init__(self, name):
        print('My name is ', name)
        super().__init__()

    def write_poem(self):
        print('Foo bar bar foo foo bar!')

class Dog(Animal):
    def __init__(self, name):
        print('My name is', name)
        super().__init__()

    def bark(self):
        print('woof woof')


michael = Human('Michael')
michael.eat('Spam')
michael.write_poem()

bruno = Dog('Bruno')
bruno.eat('bone')
bruno.bark()
>>> My name is Michael
>>> and I'm alive!
>>> ñom ñom ñom Spam
>>> Foo bar bar foo foo bar!
>>> My name is Bruno
>>> and I'm alive!
>>> ñom ñom ñom bone
>>> woof woof
Calling super() calls the base class's method; thus, by calling super().__init__() we called Animal's __init__.
There is a more advanced Python feature called metaclasses that aids the programmer in directly controlling class creation.
# Note that you generally don't need to know the compilation process, but knowing
# where everything comes from and giving complete answers shows that you truly know
# what you are talking about.
Generally, every compilation process has two steps:
- Analysis
- Code Generation.
Analysis can be broken into:
1. Lexical analysis (tokenizes source code)
2. Syntactic analysis (checks whether the tokens are legal, i.e. whether the syntax is correct)
for i in 'foo'
^
SyntaxError: invalid syntax
We missed ':'
3. Semantic analysis (contextual analysis: legal syntax can still trigger errors; did you try to divide by 0, hash a mutable object or use an undeclared function?)
1/0
ZeroDivisionError: division by zero
These three analysis steps are responsible for error handling.
The second step is responsible for most syntax errors, the most common kind of error.
The third step is responsible for exceptions.
As we have seen, exceptions are semantic errors. There are many built-in exceptions:
ImportError
ValueError
KeyError
FileNotFoundError
IndentationError
IndexError
...
You can also have user-defined exceptions, which have to inherit from the `Exception` class, directly or indirectly.
Basic example:
class DividedBy2Error(Exception):
    def __init__(self, message):
        self.message = message


def division(dividend, divisor):
    if divisor == 2:
        raise DividedBy2Error('I dont want you to divide by 2!')
    return dividend / divisor
division(100, 2)
>>> __main__.DividedBy2Error: I dont want you to divide by 2!
The shortest way is str[::-1], but it is not the most efficient one.
"Classic" way:
foo = ''
for char in 'pizza':
    foo = char + foo

>> 'azzip'
First, ask the user for the amount of numbers to sum. Then use a while loop that runs until amount_of_numbers becomes 0, subtracting 1 from amount_of_numbers on each iteration. Inside the loop, ask the user for a number and add it to the running total each time the loop runs.
def return_sum():
    amount_of_numbers = int(input("How many numbers? "))
    total_sum = 0
    while amount_of_numbers != 0:
        num = int(input("Input a number. "))
        total_sum += num
        amount_of_numbers -= 1
    return total_sum
with open('file.txt', 'w') as file:
    file.write("My insightful comment")
Using the re module
li = [[1, 4], [2, 1], [3, 9], [4, 2], [4, 5]]
sorted(li, key=lambda l: l[1])  # [[2, 1], [4, 2], [1, 4], [4, 5], [3, 9]]
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
Extract all types of food. The final output should be: {'mushrooms', 'goombas', 'turtles'}

brothers_menu = \
    [{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
# "Classic" Way
def get_food(brothers_menu) -> set:
temp = []
for brother in brothers_menu:
for food in brother['food']:
temp.append(food)
return set(temp)
# One-liner way (using a set comprehension)
{food for brother in brothers_menu for food in brother['food']}
The shortest way is my_string[::-1], but it doesn't mean it's the most efficient one.
The classic way is:
def reverse_string(string):
    temp = ""
    for char in string:
        temp = char + temp
    return temp
The Prometheus server is responsible for scraping and storing the data.
The Pushgateway is used for short-lived jobs.
Alert manager is responsible for alerts ;)
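A toy exporter showing the scraping side, assuming the official prometheus_client Python package (the metric name is made up):

import time
from prometheus_client import Counter, start_http_server

requests_total = Counter('myapp_requests_total', 'Total requests served')

start_http_server(8000)  # exposes /metrics for the Prometheus server to scrape
while True:
    requests_total.inc()  # a short-lived job would push to the Pushgateway instead
    time.sleep(1)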
In short, git pull = git fetch + git merge.
When you run git pull, it gets all the changes from the remote or central repository and attaches them to your corresponding branch in your local repository.
git fetch gets all the changes from the remote repository and stores them in a separate branch in your local repository.
The Git directory, working directory and staging area:
The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area.
This answer is taken from git-scm.com
First, you open the files which are in conflict and identify what the conflicts are. Next, based on what is accepted in your company or team, you either discuss the conflicts with your colleagues or resolve them by yourself. After resolving the conflicts, you add the files with `git add`. Finally, you run `git rebase --continue`.
git reset and git revert?
git revert creates a new commit which undoes the changes from the last commit.
git reset depends on the usage; it can modify the index or change the commit which the branch head is currently pointing at.
Using the git rebase command.
Mentioning two or three should be enough, and it's probably good to mention that 'recursive' is the default one.
recursive, resolve, ours, theirs
This page explains it the best: https://git-scm.com/docs/merge-strategies
git diff
git checkout HEAD~1 -- /path/of/the/file
Probably good to mention that it's: a merge of more than two branches in a single merge commit, and that Git will refuse to perform it when manual conflict resolution would be required.
This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
Go also has a good community.
With both var x int = 2 and x := 2 the result is the same: a variable with the value 2.
With var x int = 2 we are explicitly setting the variable type to integer, while with x := 2 we are letting Go figure out the type by itself.
False. We can't redeclare variables, but yes, we must use declared variables.
This should be answered based on your usage but some examples are:
package main

func main() {
    var x float32 = 13.5
    var y int
    y = x // does not compile: cannot use x (type float32) as type int
}
package main

import "fmt"

func main() {
    var x int = 101
    var y string
    y = string(x)
    fmt.Println(y)
}
It looks up which Unicode code point is at 101 and uses it for converting the integer to a string, so the output is "e".
If you want to get "101", you should use the package "strconv" and replace y = string(x) with y = strconv.Itoa(x)
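For comparison, Python draws the same distinction between a code-point conversion and a numeric-to-string conversion:

print(chr(101))  # 'e'   - code point 101, what Go's string(x) does here
print(str(101))  # '101' - what strconv.Itoa does in Go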
package main

func main() {
    var x = 2
    var y = 3
    const someConst = x + y // does not compile: x + y is not a constant expression
}
package main

import "fmt"

const (
    x = iota
    y = iota
)

const z = iota

func main() {
    fmt.Printf("%v\n", x) // 0
    fmt.Printf("%v\n", y) // 1
    fmt.Printf("%v\n", z) // 0 (iota resets in each const block)
}
package main

import "fmt"

const (
    _ = iota + 3
    x
)

func main() {
    fmt.Printf("%v\n", x) // 4
}
db.books.find({"name": /abc/})
db.books.find().sort({x:1})

#!/bin/bash

Depends on the language and settings used, but in Bash, for example, by default the script will keep running.
echo $0
echo $?
echo $$
echo $@
echo $#

continue skips the rest of the current loop iteration and moves on to the next one, while break exits the loop entirely.

:(){ :|:& };:
This is a fork bomb: a function named ':' that calls itself twice, piping one call into the other and backgrounding it, until the system runs out of resources.
A short way of using if/else. An example:
[[ $a = 1 ]] && b="yes, equal" || b="nope"
diff <(ls /tmp) <(ls /var/tmp)
It can be used when a pipe | is not possible, e.g. when a command does not support STDIN or when you need the output of multiple commands.
https://superuser.com/a/1060002/167769
Structured Query Language
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
ACID stands for Atomicity, Consistency, Isolation, Durability. In order to be ACID compliant, the database must meet each of the four criteria:
Atomicity - When a change occurs to the database, it should either succeed or fail as a whole.
For example, if you were to update a table, the update should completely execute. If it only partially executes, the update is considered failed as a whole, and will not go through - the DB will revert back to its original state before the update occurred. It should also be mentioned that Atomicity ensures that each transaction is completed as its own standalone "unit" - if any part fails, the whole statement fails.
Consistency - any change made to the database should bring it from one valid state into the next.
For example, if you make a change to the DB, it shouldn't corrupt it. Consistency is upheld by checks and constraints that are pre-defined in the DB. For example, if you tried to change a value from a string to an int when the column should be of datatype string, a consistent DB would not allow this transaction to go through, and the action would not be executed
Isolation - this ensures that a database will never be seen "mid-update" - as multiple transactions are running at the same time, it should still leave the DB in the same state as if the transactions were being run sequentially.
For example, let's say that 20 other people were making changes to the database at the same time. At the time you executed your query, 15 of the 20 changes had gone through, but 5 were still in progress. You should only see the 15 changes that had completed - you wouldn't see the database mid-update as the change goes through.
Durability - once a change is committed, it will remain committed regardless of what happens (power failure, system crash, etc.). This means that all completed transactions must be recorded in non-volatile memory.
Note that SQL databases are by nature ACID compliant. Certain NoSQL DBs can be ACID compliant depending on how they operate, but as a general rule of thumb, NoSQL DBs are not considered ACID compliant.
SQL - best used when data integrity is crucial. SQL is typically implemented with many businesses and areas within the finance field due to its ACID compliance.
NoSQL - Great if you need to scale things quickly. NoSQL was designed with web applications in mind, so it works great if you need to quickly spread the same information around to multiple servers
Additionally, since NoSQL does not adhere to the strict table with columns and rows structure that Relational Databases require, you can store different data types together.
A Cartesian product is when all rows from the first table are joined to all rows in the second table. This can be done implicitly by not defining a key to join, or explicitly by calling a CROSS JOIN on two tables, such as below:
Select * from customers CROSS JOIN orders;
Note that a Cartesian product can also be a bad thing - when performing a join on two tables in which both do not have unique keys, this could cause the returned information to be incorrect.
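The size of a Cartesian product is always len(A) * len(B), which is easy to see with Python's itertools (the names here are made up):

from itertools import product

customers = ["John", "Jane", "Bobby"]
orders = ["A123", "Q987"]

rows = list(product(customers, orders))
print(len(rows))  # 6 == 3 * 2: every customer paired with every order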
For these questions, we will be using the Customers and Orders tables shown below:
Customers
| Customer_ID | Customer_Name | Items_in_cart | Cash_spent_to_Date |
|---|---|---|---|
| 100204 | John Smith | 0 | 20.00 |
| 100205 | Jane Smith | 3 | 40.00 |
| 100206 | Bobby Frank | 1 | 100.20 |
Orders
| Customer_ID | Order_ID | Item | Price | Date_sold |
|---|---|---|---|---|
| 100206 | A123 | Rubber Ducky | 2.20 | 2019-09-18 |
| 100206 | A123 | Bubble Bath | 8.00 | 2019-09-18 |
| 100206 | Q987 | 80-Pack TP | 90.00 | 2019-09-20 |
| 100205 | Z001 | Cat Food - Tuna Fish | 10.00 | 2019-08-05 |
| 100205 | Z001 | Cat Food - Chicken | 10.00 | 2019-08-05 |
| 100205 | Z001 | Cat Food - Beef | 10.00 | 2019-08-05 |
| 100205 | Z001 | Cat Food - Kitty quesadilla | 10.00 | 2019-08-05 |
| 100204 | X202 | Coffee | 20.00 | 2019-04-29 |
Select *
From Customers;
Select Items_in_cart
From Customers
Where Customer_Name = "John Smith";
Select SUM(Cash_spent_to_Date) as SUM_CASH
From Customers;
Select count(1) as Number_of_People_w_items
From Customers
where Items_in_cart > 0;
You would join them on the unique key. In this case, the unique key is Customer_ID, found in both the Customers table and the Orders table.
Select c.Customer_Name, o.Item
From Customers c
Left Join Orders o
On c.Customer_ID = o.Customer_ID;
with cat_food as (
Select Customer_ID, SUM(Price) as TOTAL_PRICE
From Orders
Where Item like "%Cat Food%"
Group by Customer_ID
)
Select Customer_name, TOTAL_PRICE
From Customers c
Inner JOIN cat_food f
ON c.Customer_ID = f.Customer_ID
where c.Customer_ID in (Select Customer_ID from cat_food);
Although this was a simple statement, the "with" clause really shines when a complex query needs to be run on a table before joining it to another. "with" statements are nice because you create a pseudo temp table when running your query, instead of creating a whole new table.
The sum of all the cat food purchases wasn't readily available, so we used a "with" statement to create a pseudo table holding the sum of the prices spent by each customer, and then joined that table normally.
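If you want to experiment with the WITH pattern without a database server, Python's built-in sqlite3 also supports CTEs; the table and data below are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, item TEXT, price REAL);
    INSERT INTO orders VALUES
        (1, 'Cat Food - Tuna Fish', 10.0),
        (1, 'Cat Food - Beef', 10.0),
        (2, 'Coffee', 20.0);
""")

query = """
    WITH cat_food AS (
        SELECT customer_id, SUM(price) AS total_price
        FROM orders
        WHERE item LIKE '%Cat Food%'
        GROUP BY customer_id
    )
    SELECT customer_id, total_price FROM cat_food;
"""
print(conn.execute(query).fetchall())  # [(1, 20.0)]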
Scenarios are questions which don't have a verbal answer and instead require you to do one of the following:
These questions are usually given as a home task to the candidate, and they can combine several topics together. Below you can find several scenario questions: