profile links, images, posts

This commit is contained in:
2020-09-23 21:41:24 -04:00
parent 09a7b14bdc
commit fadba17057
21 changed files with 1206 additions and 31 deletions

View File

@@ -94,14 +94,20 @@ analytics:
 # Site Author
 author:
   name     : "Ray Lyon"
-  avatar   : # path of avatar image, e.g. "/assets/images/bio-photo.jpg"
+  avatar   : "/assets/images/avatar.jpg"
   bio      : "I am an **amazing** person."
   location : "New York, NY"
   email    :
   links:
+    - label: "E201 06CB 86FE 0B4D"
+      icon: "fas fa-fingerprint"
+      url: "https://keybase.io/scubasteve/pgp_keys.asc?fingerprint=2dc3a1066bba7040fe7963d9e20106cb86fe0b4d"
     - label: "Email"
       icon: "fas fa-fw fa-envelope-square"
       url: "mailto:ray@raylyon.net"
+    - label: "Keybase"
+      icon: "fab fa-keybase"
+      url: "https://keybase.io/scubasteve"
     - label: "Website"
       icon: "fas fa-fw fa-link"
       # url: "https://your-website.com"
@@ -116,7 +122,7 @@ author:
       url: "https://github.com/skoobasteeve"
     - label: "Instagram"
       icon: "fab fa-fw fa-instagram"
-      # url: "https://instagram.com/"
+      # url: "https://instagram.com/theraylyon"
 # Site Footer
 footer:

View File

@@ -0,0 +1,380 @@
---
layout: single
title: "Setting up a ZFS-backed KVM Hypervisor on Ubuntu 18.04"
date: 2019-03-28 22:45:00
categories: [Linux Administration]
tags: linux kvm zfs virtualization vm ubuntu
comments: true
---
The ability to run several virtual servers in a single physical box is what makes it possible for businesses of all sizes to run complex applications with minimal cost and power usage. Most importantly (arguably), it's what enables people like me to learn and practice near real-world Linux administration in a one-bedroom Brooklyn apartment.
The key to getting this working is a fast, reliable hypervisor. In this tutorial, we're going to be setting up our virtual environment with KVM/QEMU as the hypervisor using ZFS for our VM storage.
### The Tools
* [Ubuntu 18.04 Server](https://www.ubuntu.com/download/server) - Our host OS, chosen due to its out-of-the-box support for the ZFS filesystem. I'm assuming you already have Ubuntu installed and have SSH access to the system.
* [ZFS Filesystem](https://wiki.ubuntu.com/ZFS) - There are many on the web who can speak to the benefits of ZFS better than I can, but essentially ZFS is an ultra-reliable filesystem and logical volume manager with built-in software RAID, checksums, compression, and de-duplication. On top of that, it's very straightforward to configure and manage.
* [KVM/QEMU](https://www.linux-kvm.org/page/Main_Page) - Industry-standard, open-source, Linux-based VM hypervisor. Manageable from the CLI or any number of GUI applications (e.g. [virt-manager](https://virt-manager.org/)).
### Sections in this Tutorial
1. Install & Configure ZFS
2. Install & Configure KVM/QEMU
3. Create a VM
4. Managing KVM via CLI
5. Managing KVM with virt-manager
6. Concluding thoughts
### Step 1: Install & Configure ZFS
Below are the steps I used to configure a ZFS storage pool on my PowerEdge T30. On that system, I'm using (3) 4TB drives in a RAIDz1 array, allowing for one disk failure and providing more than adequate I/O performance. The RAID type you choose depends on a lot of factors, and I suggest you read one of the many articles about the benefits/drawbacks of the different types ([Wikipedia](https://en.wikipedia.org/wiki/ZFS#RAID_(%22RaidZ%22))). For the purpose of this guide, I'll be using three 4GB virtual disks in RAIDz1.
First, update your repos and install the ZFS packages:
``` bash
$ sudo apt update
```
``` bash
$ sudo apt install zfsutils-linux
```
\\
Assuming you didn't get an error message, ZFS should now be installed on your machine.
Next, we need to get the names of the drives we're going to use:
``` bash
$ sudo fdisk -l
```
\\
You should see an output similar to this:
``` bash
Disk /dev/vda: 25 GiB, 26843545600 bytes, 52428800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 07BCD546-2A83-4F37-822F-C7E7B7B8811A

Device     Start      End  Sectors Size Type
/dev/vda1   2048     4095     2048   1M BIOS boot
/dev/vda2   4096 52426751 52422656  25G Linux filesystem

Disk /dev/vdb: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/vdc: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/vdd: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```
\\
You can see above that `/dev/vda` is my OS drive and `/dev/vdb /dev/vdc /dev/vdd` are the disks we'll use for our ZFS RAID. Make note of these names.
Now we'll create our RAIDz1 pool using the above disks, with `zfs-pool` as the pool name.
```` bash
$ sudo zpool create zfs-pool raidz1 vdb vdc vdd
````
\\
If you receive no error message, you've successfully created your pool. Now, check the status of your pool.
```` bash
$ sudo zpool status
````
```` bash
  pool: zfs-pool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zfs-pool    ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    vdb     ONLINE       0     0     0
	    vdc     ONLINE       0     0     0
	    vdd     ONLINE       0     0     0

errors: No known data errors
````
\\
The newly created pool should already be mounted at the root of your system as `/zfs-pool`.
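Optionally, rather than dumping VM images at the pool root, you can carve out a dedicated child dataset with compression enabled. This is a sketch, not a required step; the dataset name is my own choice, and if you use it, the dataset mounts itself at `/zfs-pool/vm`, so you can skip creating that folder by hand later.
```` bash
# Create a child dataset for VM storage with lz4 compression
$ sudo zfs create -o compression=lz4 zfs-pool/vm
# Confirm the property took effect
$ sudo zfs get compression zfs-pool/vm
````
VM images tend to contain a lot of zeroes and compress well, so lz4 usually costs almost nothing and saves real space.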
### Step 2: Install & Configure KVM/QEMU
The below command will install all the components we need to get KVM up and running.
```` bash
$ sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager
````
\\
Next, we'll configure the network bridge. This will allow your VMs to get a routable IP address on your local network.
As of 18.04, Ubuntu deprecated `/etc/network/interfaces` and now encourages the use of a YAML file in `/etc/netplan`. We're going to edit this file to enable the bridge.
First, find the name of your primary network interface using the `ip a` command.
```` bash
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:6e:a9:d6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.80/24 brd 10.0.0.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe6e:a9d6/64 scope link
       valid_lft forever preferred_lft forever
````
\\
We can see above that our main interface is `ens3`. Make note of this as we edit the files in `/etc/netplan`.
Ubuntu has kindly created a default file for us at `/etc/netplan/50-cloud-init.yml`. Open this file in your favorite text editor and set the following parameters. (I've added comments next to the changes.)
```` bash
$ sudo nano /etc/netplan/50-cloud-init.yml
````
```` yaml
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    ens3:                        # Your interface name
      dhcp4: no                  # Indicates that we're setting a static IP address
      dhcp6: no
  bridges:
    br0:
      interfaces: [ens3]         # The interface the bridge will share
      dhcp4: no
      addresses: [10.0.0.10/24]  # Static IP of your server followed by netmask
      gateway4: 10.0.0.251       # Your gateway address (usually your router)
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]  # DNS servers
````
\\
Save your file and exit the text editor.
Apply your changes:
```` bash
$ sudo netplan apply
````
\\
Confirm your configuration.
```` bash
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 54:bf:64:92:0b:74 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 82:56:88:0d:dc:fb brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::8056:88ff:fe0d:dcfb/64 scope link
       valid_lft forever preferred_lft forever
````
\\
Your new VMs will now be reachable within your network.
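If you'd like the bridge to be selectable by name in libvirt tools, you can optionally register it as a libvirt network. This is a sketch; the network name `host-bridge` and the filename are my own choices, and `br0` is the bridge configured above.
```` xml
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
````
Save this as `host-bridge.xml`, then run `virsh net-define host-bridge.xml`, `virsh net-start host-bridge`, and `virsh net-autostart host-bridge` so VMs can simply reference the `host-bridge` network.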
### Step 3: Create a VM
New virtual machines can be created and managed from the command line via the `virt-install` and `virsh` commands. I recommend reading the man pages to get an idea of their full capabilities, but for now we'll use them to create our first Centos 7 VM.
Download the latest Centos 7 ISO file to your home directory using `wget`.
```` bash
$ wget http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso
````
\\
The ISO file will download and notify you on completion. Once finished, create a folder called `images` and put our newly downloaded ISO there. You'll likely be downloading a lot of these and your home directory will get cluttered quickly, so a little organization now will help you in the long run.
```` bash
$ mkdir ~/images
````
```` bash
$ mv ~/CentOS-7-x86_64-DVD-1810.iso ~/images/CentOS-7-x86_64-DVD-1810.iso
````
While we're on the subject, let's create a folder on our shiny new ZFS volume where we'll store our VMs.
```` bash
$ mkdir /zfs-pool/vm
````
Now we're going to create our virtual machine in our ZFS volume using the newly downloaded ISO.
```` bash
$ sudo virt-install --name=vm-centos7-01 --vcpus=1 --memory=1024 --cdrom=/home/[username]/images/CentOS-7-x86_64-DVD-1810.iso --disk path=/zfs-pool/vm/vm-centos7-01.qcow2,size=25 --os-variant=centos7.0 --graphics vnc,listen=0.0.0.0 --noautoconsole
````
I'll break down each one of these parameters:
````
--name #Name of our new VM.
--vcpus #Number of virtual CPU cores we're assigning to our VM
--memory #Amount of RAM in MB being assigned
--cdrom #Path to ISO
--disk path #Desired location of VM. Be sure to choose the path of your ZFS filesystem.
--disk size #Size of the virtual hard drive
--os-variant #Helps KVM optimize performance for your guest OS.
--graphics #Serves the installer over VNC so we can complete the graphical steps
--listen #0.0.0.0 makes VNC listen on all interfaces so you can connect from another machine
--noautoconsole #virt-install won't attempt to launch any graphical console outside VNC
````
\\
Run the above command to begin creating the VM. It should give you the below result before returning you to a prompt.
````
Starting install...
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
````
\\
We need to find out which port VNC is listening on so we can complete the graphical Centos 7 installer over VNC. To do this, run the following command.
```` bash
$ virsh vncdisplay vm-centos7-01
:4
````
In this case, the command returned `:4`, which means VNC is running on port 5904.
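The mapping is simply 5900 plus the display number. A quick shell sketch, using the sample `:4` from above:
```` bash
# VNC display :N listens on TCP port 5900+N
display=":4"                   # e.g. the output of: virsh vncdisplay vm-centos7-01
port=$((5900 + ${display#:}))  # strip the leading colon, add the base port
echo "$port"
````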
From a computer on the network, open your favorite VNC client and connect to your hypervisor using its IP and port 5904. In our case it would be `10.0.0.10:5904`.
![vnc-01](/assets/images/screenshots/vnc01.png){:class="img-responsive"}
\\
\\
\\
![vnc-02](/assets/images/screenshots/vnc02.png){:class="img-responsive"}
\\
\\
From here you can complete the OS installation and set a static IP address. Further management can then be done over SSH.
### Step 4: Managing KVM Through the CLI
Below are some useful commands for managing the state of your virtual machine from the hypervisor's terminal. For a complete list of commands and options, I suggest you run `man virsh`.
#### List Virtual Machines
```` bash
$ virsh list --all
````
#### Shutdown & Start Guest
```` bash
$ virsh shutdown vm-centos7-01
````
```` bash
$ virsh start vm-centos7-01
````
```` bash
$ virsh reboot vm-centos7-01
````
#### Autostart VM with Hypervisor
```` bash
$ sudo virsh autostart vm-centos7-01
````
### Step 5: Managing KVM with Virt-Manager
For those of you who want to avoid the command line, there's a wonderful GUI tool called `virt-manager`, and it's available in nearly every distro's package manager. Let's install it now on our Ubuntu Desktop machine.
```` bash
$ sudo apt install virt-manager
````
\\
In order to connect to our KVM hypervisor, we'll have to generate SSH keys on our desktop machine and copy them to the hypervisor. If you don't already have SSH keys generated, open a terminal and generate an SSH key pair by running the command below and following the prompts.
```` bash
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/$USER/.ssh/id_rsa):
Created directory '/home/$USER/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/$USER/.ssh/id_rsa.
Your public key has been saved in /home/$USER/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:D6wWmAEgpwxFeErFbGXs93fC53XSWPyycMXirrnzuPk $USER@ubuntu-1804
The key's randomart image is:
+---[RSA 2048]----+
|+=O.oo |
|=+.=.. |
|o+. o o |
|. = o . =|
| o o S . . *.|
| o + +.o*.+|
| o o =+.+.|
| . .=o |
| BBE |
+----[SHA256]-----+
````
\\
Next, we're going to send our SSH public key to the hypervisor by piping it through SSH. Substitute the IP address in the below command for that of your hypervisor. You'll be asked to enter the password for the server's user account.
```` bash
$ cat ~/.ssh/id_rsa.pub | ssh username@10.0.0.10 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
````
\\
The above command fetches your generated public key, connects to your hypervisor with SSH, creates the `~/.ssh` directory, and pastes the key in the `~/.ssh/authorized_keys` file.
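If you'd rather not type that pipeline, the stock `ssh-copy-id` utility wraps the same procedure in one command (substitute your own username and hypervisor IP):
```` bash
$ ssh-copy-id username@10.0.0.10
````
It also sets sane permissions on `~/.ssh` and `authorized_keys` for you.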
Once your key is copied, you can open `virt-manager` and get connected. Open the application and select `File-> Add Connection`.
\\
![virtmgr-01](/assets/images/screenshots/virtmgr-01.png){:class="img-responsive"}
\\
\\
Check both boxes and enter the username and hostname/IP address of the hypervisor, then click OK.
\\
![virtmgr-02](/assets/images/screenshots/virtmgr-02.png){:class="img-responsive"}
\\
\\
If everything went right, you should see your VM and any others you've created in the window. Double clicking on a VM will open a console and allow you to control the system, change settings, take snapshots, and just about anything else you'd want to do with your VM.
\\
![virtmgr-03](/assets/images/screenshots/virtmgr-03.png){:class="img-responsive"}
\\
\\
You can even create entirely new VMs with `virt-manager`.
\\
![virtmgr-04](/assets/images/screenshots/virtmgr-04.png){:class="img-responsive"}
\\
### Concluding Thoughts
You should now have a fully functioning KVM hypervisor running on top of the ZFS filesystem! As long as you set the location of your VMs to the ZFS pool, all your systems will benefit from the reliability and robust featureset of ZFS.
This is meant to be a very base-level configuration, so you should do your research and implement stronger security and some sort of backup solution. I may cover these in future posts, but in the meantime I'll link some helpful resources below. As always, please reach out to me via email or in the comments below if you have questions, thoughts, or criticisms.
Thanks for reading and happy hacking!
### Helpful Resources
[KVM Cheatsheet](https://blog.programster.org/kvm-cheatsheet) - Programster
[ZFS command line reference](https://www.thegeekdiary.com/solaris-zfs-command-line-reference-cheat-sheet/) - The Geek Diary

View File

@@ -0,0 +1,184 @@
---
layout: single
title: "Resize a Centos 7 Virtual Machine in KVM/QEMU"
date: 2019-05-09 22:45:00
categories: [Linux Administration]
tags: linux kvm zfs virtualization vm ubuntu
comments: true
---
Being relatively new to the RHEL/Centos world, it's safe to say I'm learning a lot as I go. While Linux is mostly, well, *Linux* between distributions, each one has its own particular nuances.
One of these nuances bit me last weekend on a new Centos 7 VM. I was spinning up a [borg-backup](https://www.borgbackup.org/) server to back up my roughly 50GB [Nextcloud](https://nextcloud.com/) instance, so I provisioned a 160GB qcow2 image to give it adequate wiggle room. After logging in for the first time post-install, I was dismayed to see only 100GB available for backup. It turns out that the Centos 7 default partitioning includes separate `/` and `/home` partitions, and it allocated a whole 50GB for root. What good is 50GB going to do me when all I'm installing is borg?
One additional differentiator of RHEL/Centos from Ubuntu is the use of [XFS](https://wiki.archlinux.org/index.php/XFS) as the default filesystem. XFS is a solid, feature-packed filesystem, but one thing it can't do is shrink. This ruled out shrinking `/` and expanding `/home`, so I decided to eat my mistake and just expand the whole VM. Below is the process for future reference.
## Extend the QCOW2 Image
Shut down your VM
``` bash
$ sudo poweroff
```
\\
SSH into your host machine and run the `qemu-img` tool on your VM guest image. I added an additional 100GB in the example below.
``` bash
$ sudo qemu-img resize /path/to/image.qcow2 +100G
Image resized.
```
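You can confirm the new virtual size before booting anything (same path placeholder as above); the `virtual size` field should now reflect the extra 100G:
``` bash
$ sudo qemu-img info /path/to/image.qcow2
```
Note that only the image grew; the guest's partitions and filesystems are still their old sizes, which is what the rest of this post fixes.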
## Extend the Partition
Since resizing root LVM partitions can be difficult from a running system, we're going to boot into a live environment for the remaining steps. You can use any Linux live environment for this, but I'll be using [GParted](https://gparted.org/) to keep things simple.
Open up virt-manager and load the GParted ISO into the VM's optical drive, then click Apply.
![resize01](/assets/images/screenshots/resize01.png){:class="img-responsive"}
\\
Select Boot Options and enable the optical drive before moving it to first in the boot order, then click Apply.
![resize02](/assets/images/screenshots/resize02.png){:class="img-responsive"}
\\
Start your VM and follow the prompts to boot into the GParted live environment. You should notice the extra space you added as "unallocated".
![gparted01](/assets/images/screenshots/gparted01.png){:class="img-responsive"}
\\
Select the LVM partition and click on Resize/Move.
![gparted02](/assets/images/screenshots/gparted02.png){:class="img-responsive"}
\\
Drag the slider to fill in the remaining space and click Resize.
![gparted03](/assets/images/screenshots/gparted03.png){:class="img-responsive"}
\\
Click Apply.
![gparted04](/assets/images/screenshots/gparted04.png){:class="img-responsive"}
![gparted05](/assets/images/screenshots/gparted05.png){:class="img-responsive"}
## Extend the Logical Volume
Close/minimize GParted and open the Terminal in the live environment.
![gparted06](/assets/images/screenshots/gparted06.png){:class="img-responsive"}
\\
We have to help the live environment discover the logical volumes. You'll be able to use `sudo` without a password in all the following commands.
``` bash
$ sudo pvscan
PV /dev/vda2 VG centos lvm2 [<259.00 GiB / 100.00 GiB free]
Total: 1 [<259.00 GiB] / in use: 1 [<259.00 GiB] / in no VG: 0 [0 ]
```
``` bash
$ sudo vgscan
Reading volume groups from cache.
Found volume group "centos" using metadata type lvm2
```
``` bash
$ sudo vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <259.00 GiB
  PE Size               4.00 MiB
  Total PE              66303
  Alloc PE / Size       40702 / 158.99 GiB
  Free  PE / Size       25601 / 100.00 GiB
  VG UUID               r8vEjh-ZHBf-WWfi-lBSb-xfSd-HiTy-02TPMe
```
\\
You can see from the above information that our Volume Group is resizable and has 100GB available for expansion. Take note of the number after "Free PE / Size".
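If you'd rather pull that free-extent count out programmatically, a small awk sketch against the sample `vgdisplay` line above (in a real session you'd pipe `sudo vgdisplay` into awk instead of a saved string):
``` bash
# The fifth whitespace-separated field of the "Free  PE / Size" line
# is the free physical-extent count
line="  Free  PE / Size       25601 / 100.00 GiB"
free_pe=$(echo "$line" | awk '{print $5}')
echo "$free_pe"
```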
\\
Let's confirm our available volumes.
``` bash
$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-a----- 106.99g
root centos -wi-a----- 50.00g
swap centos -wi-a----- 2.00g
```
\\
Now we'll use the `lvextend` tool to expand our `/home` volume. The free-extent count noted earlier from the `vgdisplay` command will be used to specify the size increase.
```bash
$ sudo lvextend -l +25601 /dev/centos/home
Size of logical volume centos/home changed from 106.99 GiB (27390 extents) to <207.00 GiB (52991 extents).
Logical volume centos/home successfully resized.
```
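If the goal is simply to give the volume all remaining free space, `lvextend` also accepts a percentage form, which avoids counting extents by hand:
```bash
$ sudo lvextend -l +100%FREE /dev/centos/home
```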
## Expand the Filesystem
Since Centos 7 defaults to XFS, we'll use the `xfs_growfs` tool. This requires the filesystem to be mounted first, so let's create a mount point and mount it there.
```bash
$ sudo mkdir /mnt/home
```
```bash
$ sudo mount /dev/centos/home /mnt/home
```
\\
Now run `xfs_growfs` to expand the filesystem into the available volume space.
```bash
$ sudo xfs_growfs /mnt/home
meta-data=/dev/mapper/centos-home isize=512 agcount=4, agsize=7011840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0 rmapbt=0
= reflink=0
data = bsize=4096 blocks=28047360, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=13695, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 28047360 to 54262784
```
\\
Confirm the new volume size.
```bash
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 489M 0 489M 0% /dev
/dev/sr0 309M 309M 0 100% /run/live/medium
/dev/loop0 269M 269M 0 100% /run/live/rootfs/filesystem.squashfs
/dev/mapper/centos-home 207G 44G 164G 21% /mnt/home
```
You can see that your `/home` volume is now 207GB in total size.
Power off your VM and be sure to remove the optical drive from the boot list in `virt-manager` before booting into your Centos 7 install.
\\
That's it! Please let me know via email or comment if you have a more efficient way to accomplish this. Going forward, the best thing to do is to set the desired partitioning at install time.
Thanks for reading and happy hacking!

View File

@@ -1,29 +0,0 @@
---
layout: single
title: "Welcome to Jekyll!"
date: 2020-09-23 18:38:44 -0400
categories: jekyll update
---
You'll find this post in your `_posts` directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run `jekyll serve`, which launches a web server and auto-regenerates your site when a file is updated.
Jekyll requires blog post files to be named according to the following format:
`YEAR-MONTH-DAY-title.MARKUP`
Where `YEAR` is a four-digit number, `MONTH` and `DAY` are both two-digit numbers, and `MARKUP` is the file extension representing the format used in the file. After that, include the necessary front matter. Take a look at the source for this post to get an idea about how it works.
Jekyll also offers powerful support for code snippets:
{% highlight ruby %}
def print_hi(name)
puts "Hi, #{name}"
end
print_hi('Tom')
#=> prints 'Hi, Tom' to STDOUT.
{% endhighlight %}
Check out the [Jekyll docs][jekyll-docs] for more info on how to get the most out of Jekyll. File all bugs/feature requests at [Jekyll's GitHub repo][jekyll-gh]. If you have questions, you can ask them on [Jekyll Talk][jekyll-talk].
[jekyll-docs]: https://jekyllrb.com/docs/home
[jekyll-gh]: https://github.com/jekyll/jekyll
[jekyll-talk]: https://talk.jekyllrb.com/

View File

@@ -0,0 +1,40 @@
/*!
* Minimal Mistakes Jekyll Theme 4.20.2 by Michael Rose
* Copyright 2013-2020 Michael Rose - mademistakes.com | @mmistakes
* Licensed under MIT (https://github.com/mmistakes/minimal-mistakes/blob/master/LICENSE)
*/
/* Variables */
@import "minimal-mistakes/variables";
/* Mixins and functions */
@import "minimal-mistakes/vendor/breakpoint/breakpoint";
@include breakpoint-set("to ems", true);
@import "minimal-mistakes/vendor/magnific-popup/magnific-popup"; // Magnific Popup
@import "minimal-mistakes/vendor/susy/susy";
@import "minimal-mistakes/mixins";
/* Core CSS */
@import "minimal-mistakes/reset";
@import "minimal-mistakes/base";
@import "minimal-mistakes/forms";
@import "minimal-mistakes/tables";
@import "minimal-mistakes/animations";
/* Components */
@import "minimal-mistakes/buttons";
@import "minimal-mistakes/notices";
@import "minimal-mistakes/masthead";
@import "minimal-mistakes/navigation";
@import "minimal-mistakes/footer";
@import "minimal-mistakes/search";
@import "minimal-mistakes/syntax";
/* Utility classes */
@import "minimal-mistakes/utilities";
/* Layout specific */
@import "minimal-mistakes/page";
@import "minimal-mistakes/archive";
@import "minimal-mistakes/sidebar";
@import "minimal-mistakes/print";

View File

@@ -0,0 +1,594 @@
/* ==========================================================================
UTILITY CLASSES
========================================================================== */
/*
Visibility
========================================================================== */
/* http://www.456bereastreet.com/archive/200711/screen_readers_sometimes_ignore_displaynone/ */
.hidden,
.is--hidden {
display: none;
visibility: hidden;
}
/* for preloading images */
.load {
display: none;
}
.transparent {
opacity: 0;
}
/* https://developer.yahoo.com/blogs/ydn/clip-hidden-content-better-accessibility-53456.html */
.visually-hidden,
.screen-reader-text,
.screen-reader-text span,
.screen-reader-shortcut {
position: absolute !important;
clip: rect(1px, 1px, 1px, 1px);
height: 1px !important;
width: 1px !important;
border: 0 !important;
overflow: hidden;
}
body:hover .visually-hidden a,
body:hover .visually-hidden input,
body:hover .visually-hidden button {
display: none !important;
}
/* screen readers */
.screen-reader-text:focus,
.screen-reader-shortcut:focus {
clip: auto !important;
height: auto !important;
width: auto !important;
display: block;
font-size: 1em;
font-weight: bold;
padding: 15px 23px 14px;
background: #fff;
z-index: 100000;
text-decoration: none;
box-shadow: 0 0 2px 2px rgba(0, 0, 0, 0.6);
}
/*
Skip links
========================================================================== */
.skip-link {
position: fixed;
z-index: 20;
margin: 0;
font-family: $sans-serif;
white-space: nowrap;
}
.skip-link li {
height: 0;
width: 0;
list-style: none;
}
/*
Type
========================================================================== */
.text-left {
text-align: left;
}
.text-center {
text-align: center;
}
.text-right {
text-align: right;
}
.text-justify {
text-align: justify;
}
.text-nowrap {
white-space: nowrap;
}
/*
Task lists
========================================================================== */
.task-list {
padding:0;
li {
list-style-type: none;
}
.task-list-item-checkbox {
margin-right: 0.5em;
opacity: 1;
}
}
.task-list .task-list {
margin-left: 1em;
}
/*
Alignment
========================================================================== */
/* clearfix */
.cf {
clear: both;
}
.wrapper {
margin-left: auto;
margin-right: auto;
width: 100%;
}
/*
Images
========================================================================== */
/* image align left */
.align-left {
display: block;
margin-left: auto;
margin-right: auto;
@include breakpoint($small) {
float: left;
margin-right: 1em;
}
}
/* image align right */
.align-right {
display: block;
margin-left: auto;
margin-right: auto;
@include breakpoint($small) {
float: right;
margin-left: 1em;
}
}
/* image align center */
.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
/* file page content container */
.full {
@include breakpoint($large) {
margin-right: -1 * span(2.5 of 12) !important;
}
}
/*
Icons
========================================================================== */
.icon {
display: inline-block;
fill: currentColor;
width: 1em;
height: 1.1em;
line-height: 1;
position: relative;
top: -0.1em;
vertical-align: middle;
}
/* social icons*/
.social-icons {
.fas,
.fab,
.far,
.fal {
color: $text-color;
}
.fa-behance,
.fa-behance-square {
color: $behance-color;
}
.fa-bitbucket {
color: $bitbucket-color;
}
.fa-dribbble,
.fa-dribble-square {
color: $dribbble-color;
}
.fa-facebook,
.fa-facebook-square,
.fa-facebook-f {
color: $facebook-color;
}
.fa-flickr {
color: $flickr-color;
}
.fa-foursquare {
color: $foursquare-color;
}
.fa-github,
.fa-github-alt,
.fa-github-square {
color: $github-color;
}
.fa-gitlab {
color: $gitlab-color;
}
.fa-instagram {
color: $instagram-color;
}
.fa-keybase {
color: #000;
}
.fa-lastfm,
.fa-lastfm-square {
color: $lastfm-color;
}
.fa-linkedin,
.fa-linkedin-in {
color: $linkedin-color;
}
.fa-mastodon,
.fa-mastodon-square {
color: $mastodon-color;
}
.fa-pinterest,
.fa-pinterest-p,
.fa-pinterest-square {
color: $pinterest-color;
}
.fa-reddit {
color: $reddit-color;
}
.fa-rss,
.fa-rss-square {
color: $rss-color;
}
.fa-soundcloud {
color: $soundcloud-color;
}
.fa-stack-exchange,
.fa-stack-overflow {
color: $stackoverflow-color;
}
.fa-tumblr,
.fa-tumblr-square {
color: $tumblr-color;
}
.fa-twitter,
.fa-twitter-square {
color: $twitter-color;
}
.fa-vimeo,
.fa-vimeo-square,
.fa-vimeo-v {
color: $vimeo-color;
}
.fa-vine {
color: $vine-color;
}
.fa-youtube {
color: $youtube-color;
}
.fa-xing,
.fa-xing-square {
color: $xing-color;
}
}
/*
Navicons
========================================================================== */
.navicon {
position: relative;
width: $navicon-width;
height: $navicon-height;
background: $primary-color;
margin: auto;
-webkit-transition: 0.3s;
transition: 0.3s;
&:before,
&:after {
content: "";
position: absolute;
left: 0;
width: $navicon-width;
height: $navicon-height;
background: $primary-color;
-webkit-transition: 0.3s;
transition: 0.3s;
}
&:before {
top: (-2 * $navicon-height);
}
&:after {
bottom: (-2 * $navicon-height);
}
}
.close .navicon {
/* hide the middle line*/
background: transparent;
/* overlay the lines by setting both their top values to 0*/
&:before,
&:after {
-webkit-transform-origin: 50% 50%;
-ms-transform-origin: 50% 50%;
transform-origin: 50% 50%;
top: 0;
width: $navicon-width;
}
/* rotate the lines to form the x shape*/
&:before {
-webkit-transform: rotate3d(0, 0, 1, 45deg);
transform: rotate3d(0, 0, 1, 45deg);
}
&:after {
-webkit-transform: rotate3d(0, 0, 1, -45deg);
transform: rotate3d(0, 0, 1, -45deg);
}
}
.greedy-nav__toggle {
&:before {
@supports (pointer-events: none) {
content: '';
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
opacity: 0;
background-color: $background-color;
-webkit-transition: $global-transition;
transition: $global-transition;
pointer-events: none;
}
}
&.close {
&:before {
opacity: 0.9;
-webkit-transition: $global-transition;
transition: $global-transition;
pointer-events: auto;
}
}
}
.greedy-nav__toggle:hover {
.navicon,
.navicon:before,
.navicon:after {
background: mix(#000, $primary-color, 25%);
}
&.close {
.navicon {
background: transparent;
}
}
}
/*
Sticky, fixed to top content
========================================================================== */
.sticky {
@include breakpoint($large) {
@include clearfix();
position: -webkit-sticky;
position: sticky;
top: 2em;
> * {
display: block;
}
}
}
/*
Wells
========================================================================== */
.well {
min-height: 20px;
padding: 19px;
margin-bottom: 20px;
background-color: #f5f5f5;
border: 1px solid #e3e3e3;
border-radius: $border-radius;
box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.05);
}
/*
Modals
========================================================================== */
.show-modal {
overflow: hidden;
position: relative;
&:before {
position: absolute;
content: "";
top: 0;
left: 0;
width: 100%;
height: 100%;
z-index: 999;
background-color: rgba(255, 255, 255, 0.85);
}
.modal {
display: block;
}
}
.modal {
display: none;
position: fixed;
width: 300px;
top: 50%;
left: 50%;
margin-left: -150px;
margin-top: -150px;
min-height: 0;
z-index: 9999;
background: #fff;
border: 1px solid $border-color;
border-radius: $border-radius;
box-shadow: $box-shadow;
&__title {
margin: 0;
padding: 0.5em 1em;
}
&__supporting-text {
padding: 0 1em 0.5em 1em;
}
&__actions {
padding: 0.5em 1em;
border-top: 1px solid $border-color;
}
}
/*
Footnotes
========================================================================== */
.footnote {
color: mix(#fff, $gray, 25%);
text-decoration: none;
}
.footnotes {
color: mix(#fff, $gray, 25%);
ol,
li,
p {
margin-bottom: 0;
font-size: $type-size-6;
}
}
a.reversefootnote {
color: $gray;
text-decoration: none;
&:hover {
text-decoration: underline;
}
}
/*
Required
========================================================================== */
.required {
color: $danger-color;
font-weight: bold;
}
/*
Google Custom Search Engine
========================================================================== */
.gsc-control-cse {
table,
tr,
td {
border: 0; /* remove table borders widget */
}
}
/*
Responsive Video Embed
========================================================================== */
.responsive-video-container {
position: relative;
margin-bottom: 1em;
padding-bottom: 56.25%;
height: 0;
overflow: hidden;
max-width: 100%;
iframe,
object,
embed {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
}
// full screen video fixes
:-webkit-full-screen-ancestor {
.masthead,
.page__footer {
position: static;
}
}

BIN
assets/images/avatar.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 612 KiB
