bregman-arie / devops-exercises
Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization
Amazon:
"DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market."
Microsoft:
"DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications."
Red Hat:
"DevOps describes approaches to speeding up the processes by which an idea (like a new software feature, a request for enhancement, or a bug fix) goes from development to deployment in a production environment where it can provide value to the user. These approaches require that development teams and operations teams communicate frequently and approach their work with empathy for their teammates. Scalability and flexible provisioning are also necessary. With DevOps, those that need power the most, get it—through self service and automation. Developers, usually coding in a standard development environment, work closely with IT operations to speed software builds, tests, and releases—without sacrificing reliability."
You should mention some or all of the following:
Make sure to elaborate :)
A development practice where developers integrate code into a shared repository frequently. It can range from a couple of changes every day or a week to a couple of changes in one hour in larger scales.
Each piece of code (change/patch) is verified to make sure the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository.
In your answer you can mention one or more of the following:
In the mutable infrastructure paradigm, changes are applied on top of the existing infrastructure and over time the infrastructure builds up a history of changes. Ansible, Puppet and Chef are examples of tools which follow the mutable infrastructure paradigm.
In the immutable infrastructure paradigm, every change is actually new infrastructure. So a change to a server will result in a new server instead of updating it. Terraform is an example of a technology which follows the immutable infrastructure paradigm.
Stateless applications don't store any data on the host, which makes them ideal for horizontal scaling and microservices. Stateful applications depend on storage to save state and data; typically, databases are stateful applications.
Styling, unit, functional, API, integration, smoke, scenario, ...
You should be able to explain those that you mention.
It can be as simple as one Ansible (or other CM tool) task that runs periodically with Cron. In more advanced cases, perhaps a CI system.
Reliability, when used in a DevOps context, is the ability of a system to recover from infrastructure failure or disruption. Part of it is also being able to scale based on your organization's or team's demands.
One can argue whether it's defined per company or globally, but according to large companies like Google, the SRE team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of their services.
Configuration drift happens when, in an environment of servers with the exact same configuration and software, updates or configuration changes are applied to some servers but not to others, so over time these servers become slightly different from all the others.
This situation might lead to bugs which are hard to identify and reproduce.
Note: cross-dependency is when you have two or more changes to separate projects and you would like to test them in mutual build instead of testing each change separately.
You can describe the UI way to add new slaves, but it's better to explain how to do it in a way that scales, like a script or using a dynamic source for slaves such as one of the existing clouds.
IaaS, PaaS, SaaS
In cloud providers, someone else owns and manages the hardware, hires the relevant infrastructure teams and pays for real estate (for both hardware and people). You can focus on your business.
In an on-premise solution, it's quite the opposite. You need to take care of the hardware and infrastructure teams and pay for everything, which can be quite expensive. On the other hand, it's tailored to your needs.
The main idea behind serverless computing is that you don't need to manage the creation and configuration of servers. All you need to focus on is splitting your app into multiple functions which will be triggered by some actions.
It's important to note that:
Within each region, there are multiple isolated locations known as Availability Zones. Multiple availability zones ensure high availability in case one of them goes down.
Edge locations are basically a content delivery network which caches data and ensures lower latency and faster delivery to users in any location. They are located in major cities around the world.
True
A way for allowing a service of AWS to use another service of AWS. You assign roles to AWS resources.
Policies documents used to give permissions as to what a user, group or role are able to do. Their format is JSON.
Stop the instance, change the instance type to one that matches the desired RAM, and start the instance.
True
A transport solution which was designed for transferring large amounts of data (petabyte-scale) into and out of the AWS cloud.
True
Amazon ElastiCache is a fully managed Redis or Memcached in-memory data store.
It's great for use cases like two-tier web applications where the most frequently accessed data is stored in ElastiCache so response time is optimal.
A MySQL & PostgreSQL based relational database. Great for use cases like two-tier web applications that have a MySQL or PostgreSQL database layer and need automated backups for the application.
CloudFormation
Cognito
Lightsail
Cost Explorer
Trusted Advisor
AWS Snowball
VPC
Amazon Aurora
AWS Database Migration Service
AWS CloudTrail
Application: user end (HTTP is here)
Presentation: establishes context between application-layer entities (encryption is here)
Session: establishes, manages and terminates the connections
Transport: transfers variable-length data sequences from a source to a destination host (TCP & UDP are here)
Network: transfers datagrams from one network to another (IP is here)
Data link: provides a link between two directly connected nodes (MAC is here)
Physical: the electrical and physical spec of the data connection (bits are here)
Unicast: one-to-one communication where there is one sender and one receiver.
Broadcast: Sending a message to everyone in the network. The address ff:ff:ff:ff:ff:ff is used for broadcasting. Two common protocols which use broadcast are ARP and DHCP.
Multicast: Sending a message to a group of subscribers. It can be one-to-many or many-to-many.
CSMA/CD stands for Carrier Sense Multiple Access / Collision Detection. Its primary focus is to manage access to a shared medium/bus where only one host can transmit at a given point in time.
CSMA/CD algorithm:
TCP establishes a connection between the client and the server to guarantee the order of the packets. UDP, on the other hand, does not establish a connection between client and server and doesn't handle packet order. This makes UDP more lightweight than TCP and a perfect candidate for streaming services.
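As a small, hedged illustration of the "no connection" point, the Python sketch below (standard library only) sends a single UDP datagram over localhost; note there is no handshake or connect step, the datagram is simply addressed and sent:

import socket

# UDP: no connection handshake; a datagram is simply addressed and sent.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)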
00110011110100011101
An open question. Answer based on your real experience. You can highlight one or more of the following:
ls
rm
rmdir (can you achieve the same result by using rm?)
grep
wc
curl
touch
man
nslookup or dig
df
ls - list files and directories. You can highlight common flags like -d, -a, -l, ...
rm - remove files and directories. You should mention -r for recursive removal
rmdir - remove directories but you should mention it's possible to use rm for that
grep - print lines that match patterns. Could be nice to mention -v, -r, -E flags
wc - print newline, word, and byte counts
curl - transfer a URL; or mention common usage like downloading files, API calls, ...
touch - update timestamps but common usage is to create files
man - reference manuals
nslookup or dig - query nameservers
df - provides info regarding file system disk space usage
You run df and get "command not found". What could be wrong and how to fix it?
Most likely the default/generated $PATH was somehow modified or overridden and no longer contains /bin (or /usr/bin), where df normally resides.
This issue could also happen if bash_profile or any configuration file of your interpreter was wrongly modified, causing erratic behavior.
You would solve this by fixing your $PATH variable.
There are several options to fix it, for example:
PATH="$PATH":/bin:/usr/bin
Note: there are many ways of getting errors like this: bash_profile or any configuration file of your interpreter being wrongly modified (causing erratic behavior), permission issues, badly compiled software (if you compiled it yourself)... there is no answer that will be true 100% of the time.
You can use the commands cron and at.
With cron, tasks are scheduled using the following format:
*/30 * * * * bash myscript.sh Executes the script every 30 minutes.
The tasks are stored in a cron file, you can write in it using crontab -e
Alternatively if you are using a distro with systemd it's recommended to use systemd timers.
Normally you will schedule batch jobs.
Using the chmod command.
777 - you give the owner, group and others: execute (1), write (2) and read (4); 4+2+1 = 7.
644 - owner has read (4) and write (2); 4+2 = 6. Group and others have read (4).
750 - owner has read, write and execute. Group has read (4) and execute (1); 4+1 = 5. Others have no permissions.
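A minimal sketch of the same arithmetic from Python; the file name example.sh is made up just for the demo:

import os
import stat

open("example.sh", "w").close()              # hypothetical file, created only for the demo
os.chmod("example.sh", 0o750)                # owner rwx (7), group r-x (5), other none (0)

# The same 750 value built from the individual read/write/execute bits:
mode = stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP
assert mode == 0o750
print(oct(os.stat("example.sh").st_mode & 0o777))   # 0o750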
A daemon is a program that runs in the background without direct control of the user, although the user can at any time talk to the daemon.
systemd has many features such as user processes control/tracking, snapshot support, inhibitor locks..
If we visualize the unix/linux system in layers, systemd would fall directly after the linux kernel.
Hardware -> Kernel -> Daemons, System Libraries, Server Display.
journalctl
dstat -t is great for identifying network and disk issues.
netstat -tnlaup can be used to see which processes are running on which ports.
lsof -i -P can be used for the same purpose as netstat.
ngrep -d any metafilter for matching regex against payloads of packets.
tcpdump for capturing packets
wireshark same concept as tcpdump but with GUI (optional).
dstat -t is great for identifying network and disk issues.
opensnoop can be used to see which files are being opened on the system (in real time).
strace is great for understanding what your program does. It prints every system call your program executed.
top will show you how much CPU percentage each process consumes
perf is a great choice for sampling profiler and in general, figuring out what your CPU cycles are "wasted" on
flamegraphs is great for CPU consumption visualization (http://www.brendangregg.com/flamegraphs.html)
top for anything unusual
dstat -t to check if it's related to disk or network
sar
iostat
grep -E '[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}' some_file
grep -E "error|failure" some_file
grep '[0-9]$' some_file
Another way to ask this: what happens from the moment you turned on the server until you get a prompt?
An exit code (or return code) represents the code returned by a child process to its parent process.
0 is an exit code which represents success, while any non-zero value represents an error or failure. Each number has a different meaning, based on how the application was developed.
I consider this as a good blog post to read more about it: https://shapeshed.com/unix-exit-codes
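A small sketch of checking exit codes from Python, assuming a Unix-like system where /nonexistent-path does not exist:

import subprocess

ok = subprocess.run(["true"])
print(ok.returncode)        # 0 - success

bad = subprocess.run(["ls", "/nonexistent-path"])
print(bad.returncode)       # non-zero - the exact value depends on how ls was developed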
For each file (and directory) in Linux there is an inode, a data structure which stores metadata related to the file like its size, owner, permissions, etc.
Hard link is the same file, using the same inode. Soft link is a shortcut to another file, using a different inode.
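A minimal sketch demonstrating the inode difference, assuming the three file names below don't already exist in the current directory:

import os

with open("original.txt", "w") as f:
    f.write("data")

os.link("original.txt", "hard.txt")        # hard link: same inode as the original
os.symlink("original.txt", "soft.txt")     # soft link: a separate file pointing at the path

print(os.stat("original.txt").st_ino == os.stat("hard.txt").st_ino)    # True
print(os.stat("original.txt").st_ino == os.lstat("soft.txt").st_ino)   # False - different inode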
False
True
sed 's/1/2/g' /tmp/myFile
find . -iname "*.yaml" -exec sed -i "s/1/2/g" {} \;
You can achieve that by appending & at the end of the command. As to why: some commands/processes can take a lot of time to finish execution or run forever.
The default signal is SIGTERM (15). This signal kills the process gracefully, which means it allows it to save its current state/configuration.
SIGTERM - default signal for terminating a process
SIGHUP - common usage is for reloading configuration
SIGKILL - a signal which cannot be caught or ignored
To view all available signals run kill -l
What does kill 0 do? What does kill -0 do?
Running (R)
Uninterruptible Sleep (D) - the process is waiting for I/O
Interruptible Sleep (S)
Stopped (T)
Dead (X)
Zombie (Z)
A process which has finished running but whose entry has not been removed from the process table.
One reason this happens is when a parent process is programmed incorrectly. Every parent process should execute wait() to get the exit code from a child process which has finished running. But when the parent isn't checking for the child's exit code, the child process can still exist (as a process table entry) even though it has finished running.
You can't kill a zombie process the regular way, with kill -9 for example, as it's already dead.
One way to get rid of a zombie process is by sending SIGCHLD to the parent process, telling it to reap its child processes. This might not work if the parent process wasn't programmed properly. The invocation is kill -s SIGCHLD [parent_pid]
You can also try closing/terminating the parent process. This will make the zombie process a child of init (1), which does periodic cleanups and will at some point clean up the zombie process.
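A hedged sketch that produces a zombie on purpose (Unix only, since it relies on fork): the child exits immediately, and the parent delays the wait() call, so for those 30 seconds the child shows up as defunct in ps:

import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)          # child: finish immediately
else:
    # parent: no wait() yet, so the child stays a zombie (check `ps aux | grep defunct`)
    time.sleep(30)
    os.waitpid(pid, 0)   # reaping the child removes the zombie entry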
If you mention at any point the ps command with arguments, be familiar with what these arguments do exactly.
What does strace do? What about ltrace?
find /some_dir -iname *.yml -print0 | xargs -0 -r sed -i "s/1/2/g"
You can use the commands top and free
The ls executable is built for an incompatible architecture.
You can use the split command this way: split -l 25 some_file
In Linux (and Unix) the first three file descriptors are: 0 - standard input (stdin), 1 - standard output (stdout), 2 - standard error (stderr).
This is a great article on the topic: https://www.computerhope.com/jargon/f/file-descriptor.htm
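A small sketch using those descriptors directly, assuming a Unix-like system and a writable /tmp:

import os

os.write(1, b"this goes to stdout (fd 1)\n")
os.write(2, b"this goes to stderr (fd 2)\n")

fd = os.open("/tmp/fd_example.txt", os.O_WRONLY | os.O_CREAT)
print(fd)      # usually 3 - the next free descriptor after 0, 1 and 2
os.close(fd)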
One of the following would work:
netstat -tnlp | grep <port_number>
lsof -i -n -P | grep <port_number>
Technically, yes.
SSH
HTTP
DNS
HTTPS
SSH - 22
HTTP - 80
DNS - 53
HTTPS - 443
Using nc is one way
One way would be ping6 ff02::1
What is /etc/resolv.conf used for? What does it include?
You can specify one or more of the following:
dig
nslookup
Depends on the init system.
Systemd: systemctl enable [service_name]
System V: update-rc.d [service_name] and add this line id:5678:respawn:/bin/sh /path/to/app to /etc/inittab
Upstart: add Upstart init script at /etc/init/service.conf
You run ssh 127.0.0.1 but it fails with "connection refused". What could be the problem?
Re-installing the OS IS NOT the right answer :)
ls, wc, dd, df, du, ps, ip, cp, cd ...
It's used in commands to mark the end of commands options. One common example is when used with git to discard local changes: git checkout -- some_file
What does the lsof command do? Have you used it? What for?
What does the awk command do? Have you used it? What for?
fork() is used for creating a new process. It does so by cloning the calling process, but the child process has its own PID, and any memory locks, I/O operations and semaphores are not inherited.
wait() is used by a parent process to wait for the child process to finish execution. If wait is not used by a parent process then a child process might become a zombie process.
Executes a program. The program is passed as a filename (or path) and must be a binary executable or a script.
What happens when you run ls -l?
The shell reads the input using getline(), which reads the input file stream and stores it into a buffer as a string
The buffer is broken down into tokens and stored in an array this way: {"ls", "-l", "NULL"}
Shell checks if an expansion is required (in case of ls *.c)
Once the program is in memory, its execution starts, first by calling readdir()
Notes:
What happens when you run ls -l *.log?
There are a couple of ways to do that:
open("/my/file") = 5
read(5, "file content")
These system calls are reading the file /my/file and 5 is the file descriptor number.
When you run ip a you see there is a device called 'lo'. What is it and why do we need it?
What does the traceroute command do? How does it work?
Another common way to ask this question is: "what part of the TCP header does traceroute modify?"
This is a good article about the topic: https://ops.tips/blog/how-linux-creates-sockets
MemFree - the amount of unused physical RAM in your system
MemAvailable - the amount of available memory for new workloads (without pushing the system to use swap), based on MemFree, Active(file), Inactive(file), and SReclaimable
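A minimal sketch that reads both values straight from /proc/meminfo (Linux only):

meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = value.strip()

print("MemFree:     ", meminfo["MemFree"])
print("MemAvailable:", meminfo["MemAvailable"])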
There are many ways to answer that. For those who look for simplicity, the book "Operating Systems: Three Easy Pieces" offers nice version:
"responsible for making it easy to run programs (even allowing you to seemingly run many at the same time), allowing programs to share memory, enabling programs to interact with devices, and other fun stuff like that"
A process is a running program. A program is one or more instructions and the program (or process) is executed by the operating system.
It would support the following:
Note: the loading of the program's code into memory is done lazily, which means the OS loads only the partial, relevant pieces required for the process to run and not the entire code.
False. It was true in the past but today's operating systems perform lazy loading which means only the relevant pieces required for the process to run are loaded first.
Buffer: a reserved place in RAM which is used to hold data for temporary purposes.
Cache: usually used when processes are reading from and writing to the disk, to make things faster by making similar data used by different programs easily accessible.
Even when using a system with one physical CPU, it's possible to allow multiple users to work on it and run programs. This is possible with time sharing, where computing resources are shared in a way that makes it seem to the user as if the system has multiple CPUs, but in fact it's simply one CPU shared by applying multiprogramming and multitasking.
Somewhat the opposite of time sharing. While in time sharing a resource is used for a while by one entity and then the same resource can be used by another entity, in space sharing the space is shared by multiple entities but in a way that it's not being transferred between them.
It's used by one entity until this entity decides to get rid of it. Take storage for example: a file is yours until you decide to delete it.
Task – a call to a specific Ansible module
Module – the actual unit of code executed by Ansible on your own host or a remote host. Modules are indexed by category (database, file, network, …) and are also referred to as task plugins.
Play – One or more tasks executed on a given host(s)
Playbook – One or more plays. Each play can be executed on the same or different hosts
Role – Ansible roles allow you to group resources based on certain functionality/service so that they can be easily reused. In a role, you have directories for variables, defaults, files, templates, handlers, tasks, and metadata. You can then use the role by simply specifying it in your playbook.
Ansible is:
An inventory file defines hosts and/or groups of hosts on which Ansible tasks are executed.
An example of inventory file:
192.168.1.2
192.168.1.3
192.168.1.4
[web_servers]
190.40.2.20
190.40.2.21
190.40.2.22
A dynamic inventory file tracks hosts from one or more sources like cloud providers and CMDB systems.
You should use one when using external sources and especially when the hosts in your environment are being automatically spun up and shut down, without you tracking every change in these sources.
- name: Create a new directory
file:
path: "/tmp/new_directory"
state: directory
---
- name: Print information about my host
hosts: localhost
gather_facts: 'no'
tasks:
- name: Print hostname
debug:
msg: "It's me, {{ ansible_hostname }}"
When given a written code, always inspect it thoroughly. If your answer is “this will fail” then you are right. We are using a fact (ansible_hostname), which is a gathered piece of information from the host we are running on. But in this case, we disabled facts gathering (gather_facts: no) so the variable would be undefined which will result in failure.
---
- hosts: all
vars:
mario_file: /tmp/mario
package_list:
- 'zlib'
- 'vim'
tasks:
- name: Check for mario file
stat:
path: "{{ mario_file }}"
register: mario_f
- name: Install zlib and vim if mario file exists
become: "yes"
package:
name: "{{ item }}"
state: present
with_items: "{{ package_list }}"
when: mario_f.stat.exists
I'm <HOSTNAME> and my operating system is <OS>
Replace <HOSTNAME> and <OS> with the actual data for the specific host you are running on
The playbook to deploy the system_info file
---
- name: Deploy /tmp/system_info file
hosts: all:!controllers
tasks:
- name: Deploy /tmp/system_info
template:
src: system_info.j2
dest: /tmp/system_info
The content of the system_info.j2 template
# {{ ansible_managed }}
I'm {{ ansible_hostname }} and my operating system is {{ ansible_distribution }}
According to variable precedence, which one will be used?
The right answer is ‘toad’.
Variable precedence is about how variables override each other when they are set in different locations. If you haven't experienced it so far, I'm sure at some point you will, which makes it a useful topic to be aware of.
In the context of our question, the order will be extra vars (always override any other variable) -> host facts -> inventory variables -> role defaults (the weakest).
A full list can be found at the link above. Also, note there is a significant difference between Ansible 1.x and 2.x.
def cap(self, string):
return string.capitalize()
Goku = 9001
Vegeta = 5200
Trunks = 6000
Gotenks = 32
With one task, switch the content to:
Goku = 9001
Vegeta = 250
Trunks = 40
Gotenks = 32
- name: Change saiyans levels
lineinfile:
dest: /tmp/exercise
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
with_items:
- { regexp: '^Vegeta', line: 'Vegeta = 250' }
- { regexp: '^Trunks', line: 'Trunks = 40' }
...
A common wrong answer is to say that Ansible and Puppet are configuration management tools and Terraform is a provisioning tool. While technically true, it doesn't mean Ansible and Puppet can't be used for provisioning infrastructure. Also, it doesn't explain why Terraform should be used over CloudFormation if at all.
The benefits of Terraform over the other tools:
What is the terraform.tfstate file used for?
It keeps track of the IDs of created resources so that Terraform knows what it is managing.
Explain what each of the following commands does: terraform init, terraform plan, terraform validate, terraform apply
terraform init scans your code to figure out which providers you are using and downloads them.
terraform plan will let you see what terraform is about to do before actually doing it.
terraform apply will provision the resources specified in the .tf files.
What happens when you run terraform apply?
You use it this way: variable "my_var" {}
Explain local-exec and remote-exec in the context of provisioners.
It's a resource which was successfully created but failed during provisioning. Terraform will fail and mark this resource as "tainted".
What does terraform taint do?
String, Integer, Map, List
What does terraform output do?
remote-exec and local-exec
The primary difference between containers and VMs is that containers allow you to virtualize multiple workloads on a single operating system, while in the case of VMs the hardware is virtualized to run multiple machines, each with its own OS.
You should choose VMs when:
You should choose containers when:
Docker CLI passes your request to the Docker daemon.
The Docker daemon downloads the image from Docker Hub.
The Docker daemon creates a new container by using the image it downloaded.
The Docker daemon redirects output from the container to the Docker CLI, which redirects it to the standard output.
docker run
Create a new image from a container’s changes
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
COPY takes in a src and destination. It only lets you copy in a local file or directory from your host (the machine building the Docker image) into the Docker image itself. ADD lets you do that too, but it also supports 2 other sources. First, you can use a URL instead of a local file / directory. Secondly, you can extract a tar file from the source directly into the destination. Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.
RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer. CMD is the command the container executes by default when you launch the built image. A Dockerfile can only have one CMD. You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container.
A common answer to this is to use hadolint project which is a linter based on Dockerfile best practices.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Docker Hub is a native Docker registry service which allows you to run pull and push commands to install and deploy Docker images from the Docker Hub.
Docker Cloud is built on top of the Docker Hub so Docker Cloud provides you with more options/features compared to Docker Hub. One example is Swarm management which means you can create new swarms in Docker Cloud.
A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only. Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged. Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
To understand what Kubernetes is good for, let's look at some examples:
You would like to run a certain application in a container on multiple different locations. Sure, if it's 2-3 servers/locations, you can do it by yourself, but it can be challenging to scale. Also, running them is not only about running the containers but also about reacting to different events.
Performing updates and changes across hundreds of containers
Handling cases where the current load requires scaling up (or down)
A cluster consists of a Master (which coordinates the cluster) and Nodes where the applications are running.
The master coordinates all the workflows in the cluster:
A node is a virtual machine or a physical server that serves as a worker for running the applications. It's recommended to have at least 3 nodes in Kubernetes production environment.
Kubelet is an agent running on each node and responsible for node communication with the master.
Minikube is a lightweight Kubernetes implementation. It creates a local virtual machine and deploys a simple (single node) cluster.
Start by inspecting the pods' status. We can use the command kubectl get pods (--all-namespaces for pods in the system namespace)
If we see "Error" status, we can keep debugging by running the command kubectl describe pod [name]. In case we still don't see anything useful we can try stern for log tailing.
In case we find out there was a temporary issue with the pod or the system, we can try restarting the pod with the following kubectl scale deployment [name] --replicas=0
Setting the replicas to 0 will shut down the process. Now start it with kubectl scale deployment [name] --replicas=1
An expression is anything that results in a value (even if the value is None). Basically, any sequence of literals so, you can say that a string, integer, list, ... are all expressions.
Statements are instructions executed by the interpreter like variable assignments, for loops and conditionals (if-else).
SOLID design principles are about:
SOLID is:
It's a search algorithm used with sorted arrays/lists to find a target value by dividing the array each iteration and comparing the middle value to the target value. If the middle value is smaller than the target value, then the target value is searched for in the right part of the divided array, else in the left part. This continues until the value is found (or until the array can no longer be divided).
The average performance of the above algorithm is O(log n). Best performance can be O(1) and worst O(log n).
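A minimal sketch of the algorithm described above:

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            low = mid + 1        # the target can only be in the right half
        else:
            high = mid - 1       # the target can only be in the left half
    return -1                    # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3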
access, search, insert and remove for the following data structures:
def find_triplets_sum_to_zero(li):
li = sorted(li)
for i, val in enumerate(li):
low, up = 0, len(li)-1
while low < i < up:
tmp = val + li[low] + li[up]
if tmp > 0:
up -= 1
elif tmp < 0:
low += 1
else:
yield li[low], val, li[up]
low += 1
up -= 1
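A quick usage example for the generator above (the input list is made up for the demo):

print(list(find_triplets_sum_to_zero([-1, 0, 1, 2, -2])))
# [(-2, 0, 2), (-1, 0, 1)]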
1. It is a high-level, general-purpose programming language created in 1991 by Guido van Rossum.
2. The language is interpreted, CPython (written in C) being the most used/maintained implementation.
3. It is strongly typed. The typing discipline is duck typing and gradual.
4. Python focuses on readability and makes use of whitespace/indentation instead of brackets { }.
5. The Python package manager is called pip ("pip installs packages"), with more than 200,000 available packages.
6. Python comes with pip installed and a big standard library that offers the programmer many precooked solutions.
7. In Python **everything** is an object.
There are many other characteristics but these are the main ones that every python programmer should know.
List
Dictionary
Set
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
Mutability determines whether you can modify an object of specific type.
The mutable data types are:
List
Dictionary
Set
The immutable data types are:
Numbers (int, float, ...)
String
Bool
Tuple
Frozenset
You can usually use the function hash() to check an object mutability. If an object is hashable, it is immutable (although this does not always work as intended as user defined objects might be mutable and hashable).
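A short illustration of that hash() check:

print(hash((1, 2, 3)))    # tuples are immutable, so hashing works

try:
    hash([1, 2, 3])
except TypeError as e:
    print(e)              # unhashable type: 'list' - lists are mutable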
In general, first class objects in programming languages are objects which can be assigned to a variable, used as a return value and passed as arguments or parameters.
In python you can treat functions this way. Let's say we have the following function
def my_function():
return 5
You can then assign a function to a variable like this: x = my_function, or you can return a function as a return value like this: return my_function
What is the result of [] is not []? Explain the result.
It evaluates to True.
The reason is that the two created empty lists are different objects. x is y only evaluates to True when x and y are the same object.
By definition, inheritance is the mechanism where an object acts as a base of another object, retaining all its
properties.
So if class B inherits from class A, every characteristic of class A will also be available in class B.
Class A would be the 'base class' and class B would be the 'derived class'.
This comes in handy when you have several classes that share the same functionality.
The basic syntax is:
class Base: pass
class Derived(Base): pass
A more elaborate example:
class Animal:
def __init__(self):
print("and I'm alive!")
def eat(self, food):
print("ñom ñom ñom", food)
class Human(Animal):
def __init__(self, name):
print('My name is ', name)
super().__init__()
def write_poem(self):
print('Foo bar bar foo foo bar!')
class Dog(Animal):
def __init__(self, name):
print('My name is', name)
super().__init__()
def bark(self):
print('woof woof')
michael = Human('Michael')
michael.eat('Spam')
michael.write_poem()
bruno = Dog('Bruno')
bruno.eat('bone')
bruno.bark()
>>> My name is Michael
>>> and I'm alive!
>>> ñom ñom ñom Spam
>>> Foo bar bar foo foo bar!
>>> My name is Bruno
>>> and I'm alive!
>>> ñom ñom ñom bone
>>> woof woof
Calling super() calls the base class method; thus, by calling super().__init__() we called Animal's __init__.
There is a more advanced python feature called MetaClasses that aid the programmer to directly control class creation.
In the following block of code, x is a class attribute while self.y is an instance attribute
class MyClass(object):
x = 1
def __init__(self, y):
self.y = y
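A quick demonstration of the difference, using the MyClass definition above:

a = MyClass(10)
b = MyClass(20)

print(a.x, b.x)    # 1 1   - x is shared through the class
print(a.y, b.y)    # 10 20 - y belongs to each instance

MyClass.x = 5
print(a.x, b.x)    # 5 5   - changing the class attribute is visible on both instances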
# Note that you generally don't need to know the compiling process but knowing where everything comes from
# and giving complete answers shows that you truly know what you are talking about.
Generally, every compiling process has two steps:
- Analysis
- Code Generation.
Analysis can be broken into:
1. Lexical analysis (Tokenizes source code)
2. Syntactic analysis (Check whether the tokens are legal or not, tldr, if syntax is correct)
for i in 'foo'
^
SyntaxError: invalid syntax
We missed ':'
3. Semantic analysis (Contextual analysis, legal syntax can still trigger errors, did you try to divide by 0,
hash a mutable object or use an undeclared function?)
1/0
ZeroDivisionError: division by zero
These three analysis steps are responsible for error handling.
The second step would be responsible for errors, mostly syntax errors, the most common kind of error.
The third step would be responsible for Exceptions.
As we have seen, Exceptions are semantic errors, there are many builtin Exceptions:
ImportError
ValueError
KeyError
FileNotFoundError
IndentationError
IndexError
...
You can also have user defined Exceptions that have to inherit from the `Exception` class, directly or indirectly.
Basic example:
class DividedBy2Error(Exception):
def __init__(self, message):
self.message = message
def division(dividend,divisor):
if divisor == 2:
raise DividedBy2Error('I dont want you to divide by 2!')
return dividend / divisor
division(100, 2)
>>> __main__.DividedBy2Error: I dont want you to divide by 2!
x, y = y, x
First you ask the user for the amount of numbers that will be used. Use a while loop that runs until amount_of_numbers becomes 0, subtracting one from amount_of_numbers in each loop. In the while loop you ask the user for a number, which will be added to a variable each time the loop runs.
def return_sum():
amount_of_numbers = int(input("How many numbers? "))
total_sum = 0
while amount_of_numbers != 0:
num = int(input("Input a number. "))
total_sum += num
amount_of_numbers -= 1
return total_sum
li = [2, 5, 6]
print("{0:.3f}".format(sum(li)/len(li)))
Maximum: max(some_list)
Minimum: min(some_list)
Last item: some_list[-1]
sorted(some_list, reverse=True)[:3]
Or
some_list.sort(reverse=True)
some_list[:3]
sorted_li = sorted(li, key=len)
Or without creating a new list:
li.sort(key=len)
sorted(list) will return a new list (original list doesn't change)
list.sort() will return None but the list is changed in-place
sorted() works on any iterable (Dictionaries, Strings, ...)
list.sort() is faster than sorted(list) in case of Lists
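A short demonstration of the first two points:

li = [3, 1, 2]

new_li = sorted(li)
print(new_li, li)    # [1, 2, 3] [3, 1, 2] - the original list is unchanged

result = li.sort()
print(result, li)    # None [1, 2, 3] - sorted in place, returns None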
[['1', '2', '3'], ['4', '5', '6']]
nested_li = [['1', '2', '3'], ['4', '5', '6']]
[[int(x) for x in li] for li in nested_li]
sorted(li1 + li2)
Another way:
i, j = 0, 0
merged_li = []
while i < len(li1) and j < len(li2):
if li1[i] < li2[j]:
merged_li.append(li1[i])
i += 1
else:
merged_li.append(li2[j])
j += 1
merged_li = merged_li + li1[i:] + li2[j:]
There are many ways of solving this problem:
# Note: :list and -> bool are just Python type annotations; they are not needed for the correct execution of the algorithm.
Taking advantage of sets and len:
def is_unique(l:list) -> bool:
return len(set(l)) == len(l)
This one can be seen used in other programming languages.
def is_unique2(l:list) -> bool:
seen = []
for i in l:
if i in seen:
return False
seen.append(i)
return True
Here we just count and make sure every element appears only once.
def is_unique3(l:list) -> bool:
for i in l:
if l.count(i) > 1:
return False
return True
This one might look more convoluted but hey, one-liners.
def is_unique4(l:list) -> bool:
return all(map(lambda x: l.count(x) < 2, l))
def my_func(li = []):
li.append("hmm")
print(li)
If we call it 3 times, what would be the result each call?
['hmm']
['hmm', 'hmm']
['hmm', 'hmm', 'hmm']
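The reason is that the default list is created once, when the function is defined, and shared between calls. A common fix, sketched below, is to default to None and create the list inside the function:

def my_func(li=None):
    if li is None:
        li = []          # a fresh list on every call
    li.append("hmm")
    print(li)

my_func()   # ['hmm']
my_func()   # ['hmm'] - no state leaks between calls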
Method 1
for i in reversed(li):
...
Method 2
n = len(li) - 1
while n >= 0:
...
n -= 1
li = [[1, 4], [2, 1], [3, 9], [4, 2], [4, 5]]
sorted(li, key=lambda l: l[1])
or
li.sort(key=lambda l: l[1])
nums = [1, 2, 3]
letters = ['x', 'y', 'z']
list(zip(nums, letters))
{k: v for k, v in sorted(x.items(), key=lambda item: item[1])}
dict(sorted(some_dictionary.items()))
some_dict1.update(some_dict2)
with open('file.txt', 'w') as file:
file.write("My insightful comment")
Using the re module
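A minimal, hedged sketch of typical re usage (the log line and patterns are made up for the demo):

import re

log_line = "2020-01-17 00:18:59 ERROR something failed"

if re.search(r"error|failure", log_line, re.IGNORECASE):
    print("found an error")

print(re.findall(r"\d+", log_line))   # ['2020', '01', '17', '00', '18', '59']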
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
Extract all types of food. The final output should be: {'mushrooms', 'goombas', 'turtles'}
brothers_menu = \
[{'name': 'Mario', 'food': ['mushrooms', 'goombas']}, {'name': 'Luigi', 'food': ['mushrooms', 'turtles']}]
# "Classic" Way
def get_food(brothers_menu) -> set:
temp = []
for brother in brothers_menu:
for food in brother['food']:
temp.append(food)
return set(temp)
# One liner way (Using list comprehension)
set([food for bro in brothers_menu for food in bro['food']])
x = "itssssssameeeemarioooooo"
y = ''.join(set(x))
def permute_string(string):
if len(string) == 1:
return [string]
permutations = []
for i in range(len(string)):
swaps = permute_string(string[:i] + string[(i+1):])
for swap in swaps:
permutations.append(string[i] + swap)
return permutations
print(permute_string("abc"))
Short way (but probably not acceptable in interviews):
from itertools import permutations
[''.join(p) for p in permutations("abc")]
Detailed answer can be found here: http://codingshell.com/python-all-string-permutations
>> ', '.join(["One", "Two", "Three"])
>> " ".join("welladsadgadoneadsadga".split("adsadga")[:2])
>> "".join(["c", "t", "o", "a", "o", "q", "l"])[0::2]
>>> 'One, Two, Three'
>>> 'well done'
>>> 'cool'
Shortest way is:
my_string[::-1]
But it doesn't mean it's the most efficient one.
The Classic way is:
def reverse_string(string):
temp = ""
for char in string:
temp = char + temp
return temp
"".join(["a", "h", "m", "a", "h", "a", "n", "q", "r", "l", "o", "i", "f", "o", "o"])[2::3]mario
What does yield do? When would you use it?
Given [['Mario', 90], ['Geralt', 82], ['Gordon', 88]], how do you sort the list by the numbers in the nested lists?
One way is:
the_list.sort(key=lambda x: x[1])
pdb :D
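A minimal sketch of dropping into pdb (in Python 3.7+ the built-in breakpoint() does the same thing):

import pdb

def buggy(x):
    pdb.set_trace()   # execution pauses here; inspect variables, step with 'n', continue with 'c'
    return x * 2

buggy(21)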
What does an empty return return?
Short answer: it returns a None object.
We could go a bit deeper and explain the difference between
def a ():
return
>>> None
And
def a ():
pass
>>> None
Or we could be asked this as a follow-up question, since they both give the same result.
We could use the dis module to see what's going on:
2 0 LOAD_CONST 0 (<code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>)
2 LOAD_CONST 1 ('a')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (a)
5 8 LOAD_CONST 2 (<code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>)
10 LOAD_CONST 3 ('b')
12 MAKE_FUNCTION 0
14 STORE_NAME 1 (b)
16 LOAD_CONST 4 (None)
18 RETURN_VALUE
Disassembly of <code object a at 0x0000029C4D3C2DB0, file "<dis>", line 2>:
3 0 LOAD_CONST 0 (None)
2 RETURN_VALUE
Disassembly of <code object b at 0x0000029C4D3C2ED0, file "<dis>", line 5>:
6 0 LOAD_CONST 0 (None)
2 RETURN_VALUE
An empty return is exactly the same as return None and functions without any explicit return
will always return None regardless of the operations, therefore
def sum(a, b):
global c
c = a + b
>>> None
li = []
for i in range(1, 10):
li.append(i)
[i for i in range(1, 10)]
def is_int(num):
if isinstance(num, int):
print('Yes')
else:
print('No')
What would be the result of is_int(2) and is_int(False)?
PEP8 is a list of coding conventions and style guidelines for Python
5 style guidelines:
1. Limit all lines to a maximum of 79 characters.
2. Surround top-level function and class definitions with two blank lines.
3. Use a trailing comma when making a tuple of one element
4. Use spaces (and not tabs) for indentation
5. Use 4 spaces per indentation level
What does assert do in Python?
Would you use assert in non-test/production code?
Given x = [1, 2, 3], what is the result of list(zip(x))?
[(1,), (2,), (3,)]
list(zip(range(5), range(50), range(50)))
list(zip(range(5), range(50), range(-2)))
[(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
[]
def add(num1, num2):
return num1 + num2
def sub(num1, num2):
return num1 - num2
def mul(num1, num2):
return num1*num2
def div(num1, num2):
return num1 / num2
operators = {
'+': add,
'-': sub,
'*': mul,
'/': div
}
if __name__ == '__main__':
operator = str(input("Operator: "))
num1 = int(input("1st number: "))
num2 = int(input("2nd number: "))
print(operators[operator](num1, num2))
This is a good reference https://docs.python.org/3/library/datatypes.html
def wee(word):
return word
def oh(f):
return f + "Ohh"
>>> oh(wee("Wee"))
<<< WeeOhh
This allows us to control what happens before the execution of any given function, and if we added another function as a wrapper (a function receiving another function that receives a function as parameter), we could also control what happens after the execution.
Sometimes we want to control the before-after execution of many functions and it would get tedious to write
f = function(function_1())
f = function(function_1(function_2(*args)))
every time, that's what decorators do, they introduce syntax to write all of this on the go, using the keyword '@'.
These two decorators (ntimes and timer) are usually used to demonstrate decorator functionality; you can find them in lots of
tutorials/reviews. I first saw these examples two years ago at PyData 2017. https://www.youtube.com/watch?v=7lmCu8wz8ro&t=3731s
Simple decorator:
def deco(f):
print(f"Hi I am the {f.__name__}() function!")
return f
@deco
def hello_world():
return "Hi, I'm in!"
a = hello_world()
print(a)
>>> Hi I am the hello_world() function!
Hi, I'm in!
This is the simplest decorator version; it basically saves us from writing hello_world = deco(hello_world).
But at this point we can only control the before execution, let's take on the after:
def deco(f):
def wrapper(*args, **kwargs):
print("Rick Sanchez!")
func = f(*args, **kwargs)
print("I'm in!")
return func
return wrapper
@deco
def f(word):
print(word)
a = f("************")
>>> Rick Sanchez!
************
I'm in!
deco receives a function -> f
wrapper receives the arguments -> *args, **kwargs
wrapper returns the function plus the arguments -> f(*args, **kwargs)
deco returns wrapper.
As you can see we conveniently do things before and after the execution of a given function.
For example, we could write a decorator that calculates the execution time of a function.
import time
def deco(f):
def wrapper(*args, **kwargs):
before = time.time()
func = f(*args, **kwargs)
after = time.time()
print(after-before)
return func
return wrapper
@deco
def f():
time.sleep(2)
print("************")
a = f()
>>> 2.0008859634399414
Or create a decorator that executes a function n times.
def n_times(n):
def wrapper(f):
def inner(*args, **kwargs):
for _ in range(n):
func = f(*args, **kwargs)
return func
return inner
return wrapper
@n_times(4)
def f():
print("************")
a = f()
>>>************
************
************
************
How would you implement the tail command in Python? Bonus: implement head as well (see the sketch below)
This approach requires a human to always check why the value was exceeded and how to handle it, while today it is more effective to notify people only when they need to take an actual action. If the issue doesn't require any human intervention, then the problem can be fixed by some process running in the relevant environment.
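A minimal sketch for the tail/head question above, assuming plain text files that fit comfortably in memory (a production tail would read the file from the end instead); the log path is just an example:

def head(file_path, n=10):
    lines = []
    with open(file_path) as f:
        for _ in range(n):
            line = f.readline()
            if not line:      # fewer than n lines in the file
                break
            lines.append(line)
    return lines

def tail(file_path, n=10):
    # Naive version: reads the whole file and keeps the last n lines.
    with open(file_path) as f:
        return f.readlines()[-n:]

print("".join(head("/var/log/syslog", 5)))
print("".join(tail("/var/log/syslog", 5)))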
Alerts
Tickets
Logging
The Prometheus server is responsible for scraping and storing the data
Push gateway is used for short-lived jobs
Alert manager is responsible for alerts ;)
What is the difference between git pull and git fetch?
Shortly, git pull = git fetch + git merge
When you run git pull, it gets all the changes from the remote or central repository and attaches it to your corresponding branch in your local repository.
git fetch gets all the changes from the remote repository, stores the changes in a separate branch in your local repository
Explain the following: git directory, working directory and staging area
The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area.
This answer taken from git-scm.com
First, you open the files which are in conflict and identify what the conflicts are. Next, based on what is accepted in your company or team, you either discuss the conflicts with your colleagues or resolve them by yourself. After resolving the conflicts, you add the files with `git add <file_name>`. Finally, you run `git rebase --continue`.
git reset and git revert?
git revert creates a new commit which undoes the changes from last commit.
git reset depends on the usage, can modify the index or change the commit which the branch head
is currently pointing at.
Using the git rebase command
What does git rebase do?
Mentioning two or three should be enough and it's probably good to mention that 'recursive' is the default one.
recursive
resolve
ours
theirs
This page explains it the best: https://git-scm.com/docs/merge-strategies
git diff
git checkout HEAD~1 -- /path/of/the/file
What is the .git directory? What can you find there?
You delete a remote branch with this syntax:
git push origin :[branch_name]
gitattributes allow you to define attributes per pathname or path pattern.
You can use it for example to control endlines in files. Windows and Unix based systems have different characters for new lines (\r\n and \n respectively). So using gitattributes we can align it for both Windows and Unix with * text=auto in .gitattributes for anyone working with git. This way, if you use the Git project in Windows you'll get \r\n and if you are using Unix or Linux, you'll get \n.
git checkout -- <file_name>
git reset HEAD~1 for removing last commit
If you would like to also discard the changes, run `git reset --hard`
git rm
False. If you would like to keep a file on your filesystem, use git reset <file_name>
Probably good to mention that it's:
This is a great article about Octopus merge: http://www.freblogg.com/2016/12/git-octopus-merge.html
Go also has a good community.
What is the difference between var x int = 2 and x := 2?
The result is the same, a variable with the value 2.
With var x int = 2 we are setting the variable type to integer while with x := 2 we are letting Go figure out by itself the type.
False. We can't redeclare variables, but yes, we must use declared variables.
This should be answered based on your usage but some examples are:
func main() {
var x float32 = 13.5
var y int
y = x
}
package main
import "fmt"
func main() {
var x int = 101
var y string
y = string(x)
fmt.Println(y)
}
It looks at what Unicode value is set at 101 and uses it for converting the integer to a string.
If you want to get "101" you should use the package "strconv" and replace y = string(x) with y = strconv.Itoa(x)
package main
func main() {
var x = 2
var y = 3
const someConst = x + y
}
package main
import "fmt"
const (
x = iota
y = iota
)
const z = iota
func main() {
fmt.Printf("%v\n", x)
fmt.Printf("%v\n", y)
fmt.Printf("%v\n", z)
}
package main
import "fmt"
const (
_ = iota + 3
x
)
func main() {
fmt.Printf("%v\n", x)
}
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
db.books.find({"name": /abc/})
db.books.find().sort({x:1})
#!/bin/bash
A few examples:
You can have an entirely different answer. It's based only on your experience.
Depends on the language and settings used. When a script written in Bash fails to run a certain command, it will keep running and will execute all the other commands mentioned after the command which failed. Most of the time we would actually want the opposite to happen. In order to make Bash exit when a specific command fails, use 'set -e' in your script.
echo $0
echo $?
echo $$
echo $@
echo $#
Answer depends on the language you are using for writing your scripts. If Bash is used for example then:
If Python, then using pdb is very useful.
Using the keyword read, so for example read x will wait for user input and will store it in the variable x.
continue and break. When do you use them, if at all?
:(){ :|:& };:
A short way of using if/else. An example:
[[ $a = 1 ]] && b="yes, equal" || b="nope"
diff <(ls /tmp) <(ls /var/tmp)
A pipe (|) is not possible in this case. Process substitution can be used when a command does not support STDIN or when you need the output of multiple commands.
https://superuser.com/a/1060002/167769
Structured Query Language
The main difference is that SQL databases are structured (data is stored in the form of tables with rows and columns - like an excel spreadsheet table) while NoSQL is unstructured, and the data storage can vary depending on how the NoSQL DB is set up, such as key-value pair, document-oriented, etc.
ACID stands for Atomicity, Consistency, Isolation, Durability. In order to be ACID compliant, the database must meet each of the four criteria.
Atomicity - When a change occurs to the database, it should either succeed or fail as a whole.
For example, if you were to update a table, the update should completely execute. If it only partially executes, the update is considered failed as a whole, and will not go through - the DB will revert back to its original state before the update occurred. It should also be mentioned that Atomicity ensures that each transaction is completed as its own standalone "unit" - if any part fails, the whole statement fails.
Consistency - any change made to the database should bring it from one valid state into the next.
For example, if you make a change to the DB, it shouldn't corrupt it. Consistency is upheld by checks and constraints that are pre-defined in the DB. For example, if you tried to change a value from a string to an int when the column should be of datatype string, a consistent DB would not allow this transaction to go through, and the action would not be executed
Isolation - this ensures that a database will never be seen "mid-update" - as multiple transactions are running at the same time, it should still leave the DB in the same state as if the transactions were being run sequentially.
For example, let's say that 20 other people were making changes to the database at the same time. At the time you executed your query, 15 of the 20 changes had gone through, but 5 were still in progress. You should only see the 15 changes that had completed - you wouldn't see the database mid-update as the change goes through.
Durability - Once a change is committed, it will remain committed regardless of what happens (power failure, system crash, etc.). This means that all completed transactions must be recorded in non-volatile memory.
Note that SQL is by nature ACID compliant. Certain NoSQL DB's can be ACID compliant depending on how they operate, but as a general rule of thumb, NoSQL DB's are not considered ACID compliant
SQL - Best used when data integrity is crucial. SQL is typically implemented with many businesses and areas within the finance field due to its ACID compliance.
NoSQL - Great if you need to scale things quickly. NoSQL was designed with web applications in mind, so it works great if you need to quickly spread the same information around to multiple servers
Additionally, since NoSQL does not adhere to the strict table with columns and rows structure that Relational Databases require, you can store different data types together.
A Cartesian product is when all rows from the first table are joined to all rows in the second table. This can be done implicitly by not defining a key to join, or explicitly by calling a CROSS JOIN on two tables, such as below:
Select * from customers CROSS JOIN orders;
Note that a Cartesian product can also be a bad thing - when performing a join on two tables in which both do not have unique keys, this could cause the returned information to be incorrect.
For these questions, we will be using the Customers and Orders tables shown below:
Customers
| Customer_ID | Customer_Name | Items_in_cart | Cash_spent_to_Date |
|---|---|---|---|
| 100204 | John Smith | 0 | 20.00 |
| 100205 | Jane Smith | 3 | 40.00 |
| 100206 | Bobby Frank | 1 | 100.20 |
ORDERS
| Customer_ID | Order_ID | Item | Price | Date_sold |
|---|---|---|---|---|
| 100206 | A123 | Rubber Ducky | 2.20 | 2019-09-18 |
| 100206 | A123 | Bubble Bath | 8.00 | 2019-09-18 |
| 100206 | Q987 | 80-Pack TP | 90.00 | 2019-09-20 |
| 100205 | Z001 | Cat Food - Tuna Fish | 10.00 | 2019-08-05 |
| 100205 | Z001 | Cat Food - Chicken | 10.00 | 2019-08-05 |
| 100205 | Z001 | Cat Food - Beef | 10.00 | 2019-08-05 |
| 100205 | Z001 | Cat Food - Kitty quesadilla | 10.00 | 2019-08-05 |
| 100204 | X202 | Coffee | 20.00 | 2019-04-29 |
Select *
From Customers;
Select Items_in_cart
From Customers
Where Customer_Name = "John Smith";
Select SUM(Cash_spent_to_Date) as SUM_CASH
From Customers;
Select count(1) as Number_of_People_w_items
From Customers
where Items_in_cart > 0;
You would join them on the unique key. In this case, the unique key is Customer_ID in both the Customers table and Orders table
Select c.Customer_Name, o.Item
From Customers c
Left Join Orders o
On c.Customer_ID = o.Customer_ID;
with cat_food as (
Select Customer_ID, SUM(Price) as TOTAL_PRICE
From Orders
Where Item like "%Cat Food%"
Group by Customer_ID
)
Select Customer_name, TOTAL_PRICE
From Customers c
Inner JOIN cat_food f
ON c.Customer_ID = f.Customer_ID
where c.Customer_ID in (Select Customer_ID from cat_food);
Although this was a simple statement, the "with" clause really shines when a complex query needs to be run on a table before joining to another. With statements are nice because you create a pseudo temp table when running your query, instead of creating a whole new table.
The sum of all the purchases of cat food wasn't readily available, so we used a with statement to create the pseudo table to retrieve the sum of the prices spent by each customer, then joined the table normally.
It's a monitoring service that provides threat protection across all of the services in Azure. More specifically, it:
Azure AD is a cloud-based identity service. You can use it as a standalone service or integrate it with an existing Active Directory service you are already running.
Authentication is the process of identifying whether a service or a person is who they claim to be. Authorization is the process of identifying what level of access the service or the person have (after authentication was done)
The Elastic Stack consists of:
The most used projects are Elasticsearch, Logstash and Kibana, also known as the ELK stack.
From the official docs:
"Elasticsearch is a distributed document store. Instead of storing information as rows of columnar data, Elasticsearch stores complex data structures that have been serialized as JSON documents"
An index in Elastic is in most cases compared to a whole database in the SQL/NoSQL world.
You can choose to have one index that holds all the data of your app, or have multiple indices where each index holds a different type of your app's data (e.g. an index for each service your app is running).
The official docs also offer a great explanation (in general, it's really good documentation, as every project should have):
"An index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data"
From the official docs:
"An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in."
Continuing the comparison to SQL/NoSQL, a Document in Elastic is like a row in a table in the SQL case, or a document in a collection in the NoSQL case. As in NoSQL, a Document is a JSON object which holds data on a unit in your app. What this unit is depends on your app: if your app is related to books, then each document describes a book; if your app is about shirts, then each document is a shirt.
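For example, indexing a single "book" document could look roughly like this (a sketch using the requests library against an assumed local cluster on localhost:9200; the books index name and the document fields are made up):

```python
import requests

# Hypothetical book document; the "books" index is created on the fly.
book = {"title": "Designing Data-Intensive Applications",
        "author": "Martin Kleppmann",
        "year": 2017}

resp = requests.put("http://localhost:9200/books/_doc/1", json=book)
print(resp.json())  # Elasticsearch acknowledges the indexed document
```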
False.
From the official docs:
"Each indexed field has a dedicated, optimized data structure. For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees."
An index is split into shards and documents are hashed to a particular shard. Each shard may be on a different node in a cluster and each one of the shards is a self contained index.
This allows Elasticsearch to scale to an entire cluster of servers.
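Simplified, document routing works roughly like "hash of the routing value (the document ID by default) modulo the number of primary shards". Elasticsearch uses its own hash function (murmur3) and routing rules, so the sketch below only illustrates the idea:

```python
NUMBER_OF_PRIMARY_SHARDS = 3  # fixed when the index is created

def route_to_shard(doc_id: str) -> int:
    # Python's built-in hash() is used here only to illustrate the concept;
    # Elasticsearch hashes the _routing value (the _id by default) with murmur3.
    return hash(doc_id) % NUMBER_OF_PRIMARY_SHARDS

for doc_id in ["book-1", "book-2", "book-3", "book-4"]:
    print(doc_id, "-> shard", route_to_shard(doc_id))
```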
Term Frequency is how often a term appears in a given document, and Document Frequency is how often a term appears across all documents. Both are used to determine the relevance of a term by calculating Term Frequency / Document Frequency.
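A back-of-the-envelope version in Python (real scoring functions such as BM25 are more involved; this only illustrates the TF/DF intuition described above):

```python
docs = [
    "elastic stores json documents",
    "json is easy to read",
    "documents documents documents",
]

def term_frequency(term: str, doc: str) -> int:
    # How many times the term appears in this one document.
    return doc.split().count(term)

def document_frequency(term: str, all_docs: list) -> int:
    # In how many documents the term appears at all.
    return sum(1 for d in all_docs if term in d.split())

term = "documents"
df = document_frequency(term, docs)
for doc in docs:
    tf = term_frequency(term, doc)
    print(doc, "->", tf / df)  # higher ratio = term is frequent here but rare overall
```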
From the official docs:
"In the query context, a query clause answers the question “How well does this document match this query clause?” Besides deciding whether or not the document matches, the query clause also calculates a relevance score in the _score meta-field."
"In a filter context, a query clause answers the question “Does this document match this query clause?” The answer is a simple Yes or No — no scores are calculated. Filter context is mostly used for filtering structured data"
From the official docs:
"Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps."
There are several possible answers for this question. One of them is as follows:
A small-scale Elastic architecture will consist of the Elastic Stack as it is. This means we will have Beats, Logstash, Elasticsearch and Kibana.
A production environment with large amounts of data can also include some kind of buffering component (e.g. Redis or RabbitMQ) and a component such as Nginx in front of the stack for security and reverse proxying.
DNS (Domain Name System) is a protocol used for converting domain names into IP addresses.
As you know, computer networking is done with IP addresses (layer 3 of the OSI model), but for us humans it's hard to remember IP addresses; it's much easier to remember names. This is why we need something like DNS to convert any domain name we type into an IP address. You can think of DNS as a huge phone book or database where each name has a corresponding IP.
In general the process is as follows:
While an A record points a domain name to an IP address, a PTR record does the opposite and resolves the IP address to a domain name.
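You can see both directions with Python's standard library (the results depend on your resolver and on which records actually exist):

```python
import socket

# Forward lookup (A record): domain name -> IP address
ip = socket.gethostbyname("example.com")
print(ip)

# Reverse lookup (PTR record): IP address -> domain name.
# Raises socket.herror if no PTR record exists for that IP.
hostname, aliases, addresses = socket.gethostbyaddr(ip)
print(hostname)
```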
According to Martin Kleppmann:
"Many processes running on many machines...only message-passing via an unreliable network with variable delays, and the system may suffer from partial failures, unreliable clocks, and process pauses."
According to the CAP theorem, it's not possible for a distributed data store to provide more than two of the following at the same time:
Consistency - every read receives the most recent write or an error
Availability - every request receives a response, though not necessarily the most recent data
Partition tolerance - the system continues to operate despite network partitions
It's an architecture in which data is stored in and retrieved from a single, non-shared source, usually exclusively connected to one node, as opposed to architectures where the request can reach one of many nodes and the data will be retrieved from one shared location (storage, memory, ...).
False. The server doesn't maintain state for incoming requests.
It consists of:
HTTP is stateless. To share state, we can use Cookies.
A cookie is a small piece of data the server sends in a Set-Cookie response header. The browser stores it and sends it back in the Cookie header with every subsequent request to the same site, which lets the server recognize the client and keep per-client state such as a session ID.
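A minimal sketch of the mechanism (plain dicts stand in for real HTTP messages; in practice session handling is done by your web framework):

```python
import uuid

sessions = {}  # server-side store: session id -> state

def handle_login(username: str) -> dict:
    # Server creates a session and asks the client to remember its id.
    session_id = str(uuid.uuid4())
    sessions[session_id] = {"user": username, "items_in_cart": 0}
    return {"Set-Cookie": f"session_id={session_id}"}

def handle_next_request(cookie_header: str) -> dict:
    # The client sends the cookie back, so the server can find its state again.
    session_id = cookie_header.split("=", 1)[1]
    return sessions[session_id]

response_headers = handle_login("jane")
cookie = response_headers["Set-Cookie"]
print(handle_next_request(cookie))  # {'user': 'jane', 'items_in_cart': 0}
```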
SSH, HTTP, DHCP, DNS, ...
Although the following questions are not DevOps related, they are still quite common and part of the DevOps interview process so it's better to prepare for them as well.
Tell them how you heard about them :D Relax, there is no right or wrong answer here... I think.
Some ideas (some of them bad and should not be used):
If you have worked in this area for more than 5 years, it's hard to imagine the answer would be no. It also doesn't have to be a big service outage. Maybe you merged some code that broke a project or its tests. Simply focus on what you learned from such an experience.
You know your order best; just think carefully about whether you really want to put salary at the top or the bottom...
Bad answer: I don't. Better answer: Every person has strengths and weaknesses. This is true also for colleagues I don't have a good work relationship with, and it's what helps me create a good work relationship with them. If I am able to highlight or recognize their strengths, I'm able to focus mainly on that when communicating with them.
You know best, but here are some ideas if you find it hard to express yourself:
You know best :)
You can use and elaborate on one or all of the following:
A list of questions you as a candidate can ask the interviewer during or after the interview. These are only suggestions; use them carefully. Not every interviewer will be able (or happy) to answer these, which could perhaps be a red flag regarding working in such a place, but that's really up to you.
Be careful when asking this question - all companies, regardless of size, have some level of tech debt.
Phrase the question in a way that acknowledges all companies have to deal with this, but that you want to see the current pain points they are dealing with.
This is a great way to figure out how managers deal with unplanned work, and how good they are at setting expectations for projects.
This can give you insight into some of the cool projects a company is working on, and whether you would enjoy working on projects like these. It's also a good way to see if managers allow employees to learn and grow with projects outside of the normal work you'd do.
Similar to the tech debt question, this helps you identify any pain points with the company.
Additionally, it can be a great way to show how you'd be an asset to the team.
For example, if they mention they have problem X, and you've solved that in the past, you can show how you'd be able to mitigate that problem.
Not only will this tell you what is expected of you, it will also give you a big hint about the type of work you are going to do in your first months on the job.
A connection pool is a cache of database connections, used to avoid the overhead of establishing a new connection for every query sent to a database.
A connection leak is a situation where a database connection isn't closed after being created and is no longer needed.
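A minimal sketch of the idea using a queue of sqlite3 connections (real applications would use their driver's or framework's pooling; the pool size here is arbitrary):

```python
import queue
import sqlite3

POOL_SIZE = 5
pool = queue.Queue(maxsize=POOL_SIZE)

# Open the connections once, up front, instead of per query.
for _ in range(POOL_SIZE):
    pool.put(sqlite3.connect(":memory:", check_same_thread=False))

def run_query(sql: str):
    conn = pool.get()          # borrow a connection from the pool
    try:
        return conn.execute(sql).fetchall()
    finally:
        pool.put(conn)         # always return it - forgetting this is a connection leak

print(run_query("SELECT 1"))
```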
"A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of organisation's decision-making process"
A single data source (at least usually) which is stored in a raw format.
Vertical scaling is the process of adding resources to increase the power of existing servers. For example, adding more CPUs, adding more RAM, etc.
Horizontal scaling is the process of adding more instances that are able to handle requests as one unit (for example, behind a load balancer).
The load on the producers or consumers may be high, which will then cause them to hang or crash.
Instead of working in "push mode", the consumers can pull tasks only when they are ready to handle them. This can be addressed by using a streaming platform like Kafka, Kinesis, etc. Such a platform will handle the high load/traffic and pass tasks/messages to consumers only when they are ready to get them.
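A simplified sketch of the pull model, with an in-process queue standing in for a broker like Kafka (the real thing adds persistence, partitioning and consumer groups):

```python
import queue
import threading
import time

broker = queue.Queue()  # stands in for a Kafka topic / Kinesis stream

def producer():
    for i in range(10):
        broker.put(f"task-{i}")  # producer never blocks on a slow consumer

def consumer():
    while True:
        task = broker.get()      # consumer pulls only when it is ready
        time.sleep(0.1)          # simulate slow processing
        print("processed", task)
        broker.task_done()

threading.Thread(target=producer).start()
threading.Thread(target=consumer, daemon=True).start()
broker.join()  # wait until every task has been processed
```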
You can mention:
roll-back & roll-forward, cut over, dress rehearsals, DNS redirection
Exercises are all about:
Below you can find several exercises
Thanks to all of our amazing contributors who make it easy for everyone to learn and prepare for their interviews.
Logos credits can be found here