Author Archive

Ansible Error: template error while templating string: Encountered unknown tag ‘do’

Just add the jinja2.ext.do extension to ansible.cfg:

[defaults]

# some basic default values...

jinja2_extensions=jinja2.ext.do
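
For reference, the error appears when a template uses the Jinja2 {% do %} statement. A hypothetical snippet like this only renders once the extension is enabled:

{% set packages = [] %}
{% do packages.append('nginx') %}
{{ packages }}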

Minio setup

Minio is a self-hosted object storage server compatible with the S3 API.

Create key and secret

cat /proc/cpuinfo /proc/iomem | sha512sum | awk '{print "\nkey = " substr($1,1,24) "\nsecret = " substr($1,25,64) }'

Make a Nomad job file

job "minio-test" {
  datacenters = ["dc1"]
  type = "service"

  group "minio" {
    count = 1

    task "minio-test" {
      driver = "docker"

      config {
        image = "minio/minio:edge"
        args = [ "server",  "/data" ]
        port_map {
          http = 9000
        }
        volumes = [
          "/tmp/minio-test/data:/data",
          "/tmp/minio-test/config:/root/.minio"
        ]
      }
      env {
        "MINIO_ACCESS_KEY" = "642a51...eff82eb",
        "MINIO_SECRET_KEY" = "0c936a31990cc515a0da1e748....7795c74cfcc60cc256f47880db7b"
      }
      
      resources {
        cpu    = 600 # 600 MHz
        memory = 1024 # 1024 MB
        network {
          mbits = 100
          port "http" {
              static = 12345
          }
        }
      }
      service {
        name = "minio-test"
        tags = [ "minio", "web", "urlprefix-/minio" ]
        port = "http"
        check {
          type     = "tcp"
          interval = "60s"
          timeout  = "5s"
        }
      }
    }
  }
}
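
Run the job with the Nomad CLI (assuming the job above is saved as minio-test.nomad):

nomad job run minio-test.nomad
nomad job status minio-test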

Get mc – CLI for Minio

wget https://dl.min.io/client/mc/release/linux-amd64/mc
mv mc /usr/local/mc
chmod +x /usr/local/mc

Create connection to storage and add bucket

mc config host add minio-cloud http://10.0.0.1:10002 <key> <secret>
mc mb minio-cloud/reverse

Create a policy file user.json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutBucketPolicy",
        "s3:GetBucketPolicy",
        "s3:DeleteBucketPolicy",
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::reverse"
      ],
      "Sid": ""
    },
    {
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::reverse/*"
      ],
      "Sid": ""
    }
  ]
}

Add policy

mc admin policy add minio-cloud user user.json
mc admin policy info minio-cloud user

Add user

mc admin user add minio-cloud <user> <pass>

Apply policy to user

mc admin policy set minio-cloud user user=user
mc admin user list minio-cloud --json

Go to another server/OS, configure the connection to Minio, and test an upload.
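
Add the host alias on that machine first, this time with the user credentials created above (host and port are the same assumptions as in the earlier example):

mc config host add minio-cloud http://10.0.0.1:10002 <user> <pass>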

mc cp /var/log/messages minio-cloud/reverse/log/

Done

Documentation – https://docs.min.io/docs/minio-client-complete-guide.html

TCP state diagram

Ansible. nmcli module. Failed to import the required Python library

In newer versions of Ubuntu/CentOS you need to use the new name of the imported module. When using the old nmcli.py I got this error:

Failed to import the required Python library (NetworkManager glib API) on node's Python /usr/bin/python3. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}

when running playbook:

  - nmcli:
      conn_name: my-eth1
      ifname: eth1
      type: ethernet
      ip4: 192.0.2.100/24
      gw4: 192.0.2.1
      ip6: '2001:db8::cafe'
      gw6: '2001:db8::1'
      state: present

from https://docs.ansible.com/ansible/2.5/modules/nmcli_module.html

or this error when trying to use NMClient from a Python script:

Traceback (most recent call last):
  File "test.py", line 6, in <module>
    gi.require_version('NMClient', '1.2')
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 129, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace NMClient not available

If we look at the source code of the nmcli module around line 567 (/usr/local/lib/python3.6/dist-packages/ansible/modules/net_tools/nmcli.py):

try:
    import gi
    gi.require_version('NMClient', '1.0')
    gi.require_version('NetworkManager', '1.0')

    from gi.repository import NetworkManager, NMClient

In the new version we need to import the new name NM, like this:

try:
    import gi
    gi.require_version('NM', '1.0')

    from gi.repository import NM

And the playbook works as expected.
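
To quickly confirm that the new bindings are available on the target host (assuming python3-gi and the gir1.2-nm-1.0 packages are installed), a one-liner like this should print without errors:

python3 -c "import gi; gi.require_version('NM', '1.0'); from gi.repository import NM; print('NM bindings OK')"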

https://github.com/ansible/ansible/pull/65726/commits/dbe0c95a7a919e87695b2c3e910e12cb7ad02371

https://developer.gnome.org/libnm/stable/usage.html

Golang: replace a module with your own

When you need a quick fix for some module without pushing it upstream, use the replace directive in go.mod.

For example, I want to check whether my application will work with the changes in a branch:

cd /tmp/<my code>
git clone github.com/<repo>
git checkout branch-with-fix
nano go.mod

and add a replace directive for that repo:

require (
        github.com/<repo> v0.1.1
        github.com/pkg/errors v0.9.1 // indirect
        go.opencensus.io v0.22.3 // indirect
)

replace github.com/<repo> v0.1.1 => /tmp/<mycode>/<repo>
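
To confirm Go actually picks up the local copy, rebuild and check how the module resolves (a quick sanity check using the same placeholder paths):

go build ./...
go list -m github.com/<repo>
# the output should show the "=> /tmp/<mycode>/<repo>" replacement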

Make a simple Kubernetes cluster and deploy your own Golang application

The main goal is to create a simple cluster with one master node and two worker nodes.

All procedures will be done on 3 VPS with an Ubuntu 20.04 Server installation. The first step is to create a separate user on all VPS; for the test environment let it be 'kubuntu'. For orchestration we will use Ansible.

Let's create a directory for Ansible and make an inventory file

mkdir kube
cd kube
touch hosts

Write the config to the hosts file

---
masters:
  hosts:
    kmaster:
      ansible_host: 10.0.1.7
workers:
  hosts:
    kworker1:
      ansible_host: 10.0.1.8
    kworker2:
      ansible_host: 10.0.1.9

Now we create an Ansible playbook initial.yml that creates the user on each VPS

- hosts: all
  become: yes
  tasks:
    - name: create the 'kubuntu' user
      user: name=kubuntu append=yes state=present createhome=yes shell=/bin/bash
    - name: allow 'kubuntu' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'kubuntu ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'
    - name: set up authorized keys for the kubuntu user
      authorized_key: user=kubuntu key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub

Run playbook

ansible-playbook -i hosts initial.yml
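
A quick way to verify that Ansible can reach all nodes as the new user:

ansible -i hosts all -m ping -u kubuntu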

Next we install packages for Kubernetes, but first we need to create a template for configuring Docker. It is needed to disable cgroupfs as the default Docker cgroup driver and switch to systemd. Please follow the guide at https://kubernetes.io/docs/setup/cri/

Make file templates/docker.daemon.j2

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

You may also need to edit /var/lib/kubelet/kubeadm-flags.env to set the systemd cgroup driver.
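
For illustration, on kubeadm 1.18 that file typically contains something along these lines (exact flags vary by version, so treat this only as a sketch):

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"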

Now create playbook kube-dependencies.yml

- hosts: all
  become: yes
  tasks:
   - name: install Docker
     apt:
       name: docker.io
       state: present
       update_cache: true
   - name: Enable systemD cgroups
     template:
       src: templates/docker.daemon.j2
       dest: /etc/docker/daemon.json

   - name: Start Docker
     service:
       name: docker
       state: started
       enabled: yes

   - name: install APT Transport HTTPS
     apt:
       name: apt-transport-https
       state: present

   - name: add Kubernetes apt-key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: add Kubernetes' APT repository
     apt_repository:
      repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: 'kubernetes'

   - name: install kubelet
     apt:
       name: kubelet=1.18.0-00
       state: present
       update_cache: true

   - name: install kubeadm
     apt:
       name: kubeadm=1.18.0-00
       state: present

   - name: Disable SWAP since kubernetes can't work with swap enabled (1/2)
     become: yes
     shell: |
       swapoff -a

   - name: Disable SWAP in fstab since kubernetes can't work with swap enabled (2/2)
     become: yes
     replace:
       path: /etc/fstab
       regexp: '^([^#].*?\sswap\s+sw\s+.*)$'
       replace: '# \1'



- hosts: masters
  become: yes
  tasks:
   - name: install kubectl
     apt:
       name: kubectl=1.18.0-00
       state: present
       force: yes

In this playbook we install pinned kubelet and kubeadm versions (1.18.0), disable swap, and remove it from fstab. On the master node we additionally install kubectl.
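
Run the playbook on all nodes, the same way as before:

ansible-playbook -i hosts kube-dependencies.yml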

Make a playbook masters.yml for configuring the master node

- hosts: masters
  become: yes
  tasks:
    - name: initialize the cluster
#      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: kubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/kubuntu/.kube/config
        remote_src: yes
        owner: kubuntu

    - name: install Pod network
      become: yes
      become_user: kubuntu
      tags: flannel
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

Run playbook:

ansible-playbook -i hosts masters.yml

In my case I used the flag '--ignore-preflight-errors=all' because Kubernetes needs 2 or more CPU cores and my VPS has only 1.

Go to the master node and check that the cluster is up and running

kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   8m35s   v1.18.0

If everything is OK we can start adding the worker nodes to the cluster. Create workers.yml

- hosts: masters
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"


- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['kmaster'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

Run playbook

ansible-playbook -i hosts workers.yml

Check workers in cluster

kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
kmaster    Ready    master   15m    v1.18.0
kworker1   Ready    <none>   116s   v1.18.0
kworker2   Ready    <none>   117s   v1.18.0

Test our cluster by adding a test nginx service that only shows the default welcome page

kubuntu@kmaster:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
kubuntu@kmaster:~$ kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort
service/nginx exposed
kubuntu@kmaster:~$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        16m
nginx        NodePort    10.106.156.95   <none>        80:32709/TCP   12s

If we are in the same network as nginx, then just go to a browser and open http://10.106.156.95. Or we can use curl

curl -v 10.106.156.95
*   Trying 10.106.156.95:80...
* TCP_NODELAY set
* Connected to 10.106.156.95 (10.106.156.95) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.106.156.95
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.17.10
< Date: Mon, 27 Apr 2020 14:12:48 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
< Connection: keep-alive
< ETag: "5e95c66e-264"
< Accept-Ranges: bytes
< 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Works as expected. Cleaning up

kubectl delete service nginx

Chapter 2. Deploy our own Golang app

Create directory for application

mkdir go_app
cd go_app
nano main.go

Code inside

package main

import (
	"fmt"
	"time"

	"github.com/gin-gonic/gin"
	"os"
	"gopkg.in/yaml.v2"
	"io/ioutil"
	"log"
)

type conf struct {
    Text string `yaml:"text"`
    Code int `yaml:"code"`
}

func (c *conf) getConf() *conf {
    filename := "/etc/ping/config.yml"
//    filename := "config.yml"
    _, err := os.Stat(filename)
    if os.IsNotExist(err) {
	c.Text = "Pong"
	c.Code = 200
    } else {
      yamlFile, err := ioutil.ReadFile(filename)
      if err != nil {
        log.Printf("yamlFile.Get err   #%v ", err)
      }
      err = yaml.Unmarshal(yamlFile, c)
      if err != nil {
        log.Fatalf("Unmarshal: %v", err)
      }
    }

    return c
}


func main() {
	
	var c conf
	c.getConf()
	router := gin.New()

	// LoggerWithFormatter middleware will write the logs to gin.DefaultWriter
	// By default gin.DefaultWriter = os.Stdout
	router.Use(gin.LoggerWithFormatter(func(param gin.LogFormatterParams) string {

		// your custom format
		return fmt.Sprintf("%s - [%s] \"%s %s %s %s %d %s \"%s\" %s BODY: %s\"\n",
			param.ClientIP,
			param.TimeStamp.Format(time.RFC1123),
			param.Method,
			param.Request.Host,
			param.Path,
			param.Request.Proto,
			param.StatusCode,
			param.Latency,
			param.Request.UserAgent(),
			param.ErrorMessage,
			param.Request.Body,
		)
	}))
	router.Use(gin.Recovery())

	router.GET("/ping", func(g *gin.Context) {
		g.String(c.Code, c.Text)
	})
	
	router.Run(":9991")

	
}

This small app receives requests on the /ping endpoint, logs them to stdout, and answers the client with the text "Pong". If config.yml exists, the answer text and code are overridden:

text: pong by config
code: 200

The main code was taken from a post on Medium – https://medium.com/google-cloud/deploy-go-application-to-kubernetes-in-30-seconds-ebff0f51d67b – with some changes. So, to start, let's build our app:

go build 
go run main.go
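
Once it builds and runs, you can hit the endpoint locally to verify the default answer (the app listens on port 9991, per the code above):

curl -i http://localhost:9991/ping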

If everything is OK, then start creating a Makefile for automation.

cat Makefile 
#TAG?=$(shell git rev-list HEAD --max-count=1 --abbrev-commit)
TAG=0.1.2
export TAG

install:
	go get .

build: install
	go build -ldflags  "-X main.version=$(TAG) -w -s -linkmode external -extldflags -static"  -o  ping_app .

pack: build
	GOOS=linux make build
	docker build -t <registry-host>/myproject/ping_app:$(TAG) .

upload: pack
	docker push <registry-host>/myproject/ping_app:$(TAG)
deploy:
	envsubst < k8s/deployment.yml | kubectl apply -f -

If your code is tracked by Git, use the first (commented) line to assign the TAG variable, or use a static version as in the second line. The make targets chain together: for example, pack depends on build and afterwards builds a Docker image tagged with TAG. If you do not have your own private Docker registry, start one with docker-compose following https://docs.docker.com/registry/ or https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-top-of-digitalocean-spaces-and-use-it-with-digitalocean-kubernetes

To create the Docker image we need a Dockerfile that describes how to start our app

cat Dockerfile 
FROM alpine:3.4

RUN apk -U add ca-certificates

EXPOSE 9991

ADD ping_app /bin/ping_app
#RUN chmod +x /bin/ping_app
ADD config.yml /etc/ping/config.yml

CMD ["/bin/ping_app", "-config", "/etc/ping/config.yml"]

This Dockerfile builds an image with our binary and config file, and exposes port 9991.

When we run make pack and make upload, the image is built and uploaded to the registry; now we only need to deploy the image/service to Kubernetes (the deploy target in the Makefile)

cat k8s/deployment.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ping-service
  labels:
    app: ping-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ping-service
  template:
    metadata:
      labels:
        app: ping-service
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: ping-service
        image: <registry-host>/myproject/ping_app:0.1.3
        command:
        ports:
          - containerPort: 9991
        volumeMounts:
          - name: ping-config
            mountPath: /etc/ping/
            readOnly: true
      volumes:
        - name: ping-config
          configMap: { name: ping-config }
---
kind: Service
apiVersion: v1
metadata:
  name: ping-service
spec:
  type: LoadBalancer
  selector:
    app: ping-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9991

Some comments…

selector: matchLabels: this is the link that tells the Service (and the Deployment) which pods to target, by matching labels

imagePullSecrets: - name: regcred – I have a Docker registry with credentials, so we need to tell Kubernetes to use a login/password when pulling images. The simplest way is to log in to the Docker registry with docker login my-registry-url:5000, which writes a config to ~/.docker/config.json. Add this config to our Kubernetes cluster with the command:

kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=/home/kubuntu/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson

This registers a secret named regcred in the cluster, which we then reference in imagePullSecrets.

This part of the config passes our app config to the cluster, but we need a way to update the config without rebuilding the whole image in the registry.

        volumeMounts:
          - name: ping-config
            mountPath: /etc/ping/
            readOnly: true
      volumes:
        - name: ping-config
          configMap: { name: ping-config }

We will use a ConfigMap as a volume and mount it to /etc/ping/. How do we create the config in the cluster? Simple…

cat k8s/configmap.yml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: ping-config
data:
  config.yml: |-
    text: pong by config
    code: 200

Run kubectl apply -f k8s/configmap.yml
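
To check that the ConfigMap landed in the cluster, inspect it:

kubectl get configmap ping-config -o yaml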

With this map we can update the ConfigMap, and the file config.yml will be changed in the cluster automatically.

The last step is to run make deploy

make deploy
envsubst < k8s/deployment.yml | kubectl apply -f -
deployment.apps/ping-service configured
service/ping-service changed

See what we have:

go_app $ kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP        3d23h
ping-service   LoadBalancer   10.98.133.140   <pending>     80:31842/TCP   25h

go_app $ kubectl get deployments.apps 
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx          1/1     1            1           3d23h
ping-service   1/1     1            1           25h

go_app $ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-f89759699-7j8v6          1/1     Running   1          3d23h
ping-service-d545db97b-496p4   1/1     Running   0          25h

We can run curl -v 10.98.133.140 and the cluster answers "pong by config".


dial unix /var/run/docker.sock: connect: permission denied

First, add your user to the docker group (log out and back in, or run newgrp docker, for the group change to take effect)

sudo usermod -aG docker <username>

And restart the Docker daemon

sudo systemctl restart docker

If this does not help, change the ACL on the socket file:

sudo setfacl --modify user:<username>:rw /var/run/docker.sock
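
A quick check that the socket is now accessible without sudo:

docker ps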

How to delete an archive in an AWS Glacier Vault

This post is written only as a note for myself. All information was found in the docs at https://docs.aws.amazon.com/cli/latest/reference/glacier/delete-archive.html and on GitHub: https://gist.github.com/veuncent/ac21ae8131f24d3971a621fac0d95be5

First step: create a job to retrieve the vault inventory (it returns the archive IDs needed for deletion)
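
A sketch of that first step with the AWS CLI, assuming a vault named <vault-name> (the account ID '-' means the account of the current credentials):

aws glacier initiate-job --account-id - --vault-name <vault-name> --job-parameters '{"Type": "inventory-retrieval"}'
# check the job status later
aws glacier list-jobs --account-id - --vault-name <vault-name>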


Lighthouse on Debian

Install latest Chrome

apt-get install xvfb imagemagick
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
dpkg -i google-chrome-stable_current_amd64.deb

Install NodeJS and NPM

apt install nodejs
apt install npm
nodejs -v

Install Lighthouse

npm install -g lighthouse

Run test

lighthouse https://reverse.org.ua --chrome-flags="--no-sandbox --headless --disable-gpu"
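
If you want to keep the result, Lighthouse can also write an HTML report (a sketch; flag names per the lighthouse CLI):

lighthouse https://reverse.org.ua --output html --output-path ./report.html --chrome-flags="--no-sandbox --headless --disable-gpu"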

WireGuard on Kernel 5.6. Quick start

cd /tmp
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/linux-headers-5.6.0-050600_5.6.0-050600.202003292333_all.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/linux-headers-5.6.0-050600-generic_5.6.0-050600.202003292333_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/linux-image-unsigned-5.6.0-050600-generic_5.6.0-050600.202003292333_amd64.deb
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.6/linux-modules-5.6.0-050600-generic_5.6.0-050600.202003292333_amd64.deb

Install all downloaded .deb packages

dpkg -i *.deb

Reboot the server/PC with the reboot command

After startup, check the kernel version:

uname -a
Linux test-srv 5.6.0-050600-generic #202003292333 SMP Sun Mar 29 23:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Test WireGuard on the server

ip link add dev wg0 type wireguard
ip address add dev wg0 192.168.2.1/24
#get current state:
ip a s wg0
--
3: wg0: <POINTOPOINT,NOARP> mtu 1420 qdisc noop state DOWN group default qlen 1000
    link/none 
    inet 192.168.2.1/24 scope global wg0
       valid_lft forever preferred_lft forever

Add the repository for Ubuntu 18.04

add-apt-repository ppa:wireguard/wireguard 
apt-get update
apt-get install wireguard-tools resolvconf

Make some changes to the firewall on the server

# to enable kernel relaying/forwarding ability on bounce servers
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.proxy_arp = 1" >> /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf

# to add iptables forwarding rules on bounce servers
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i wg0 -o wg0 -m conntrack --ctstate NEW -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

Simple script for generating key pairs and base configs for server and client

#!/bin/bash

HOST=wg.reverse.org.ua
PORT=32001
S_IP=192.168.2.1/24
C_IP=192.168.2.3/32

#create Server key
wg genkey | tee wg-s-private.key | wg pubkey > wg-s-public.key
#create Client key
wg genkey | tee wg-c-private.key | wg pubkey > wg-c-public.key

S_PRIV_KEY=`cat wg-s-private.key`
S_PUB_KEY=`cat wg-s-public.key`
C_PRIV_KEY=`cat wg-c-private.key`
C_PUB_KEY=`cat wg-c-public.key`


cat >wg0.server <<EOF
[Interface]
Address = ${S_IP}
ListenPort = ${PORT}
PrivateKey = ${S_PRIV_KEY}
DNS = 1.1.1.1,8.8.8.8

[Peer]
# Name = notebook
PublicKey = ${C_PUB_KEY}
AllowedIPs = ${C_IP}
EOF

cat >wg0.client <<EOF
[Interface]
# Name = laptop
Address = ${C_IP}
PrivateKey = ${C_PRIV_KEY}
DNS = 1.1.1.1,8.8.8.8
# If you have additional local networks, add static routes for it
#PostUp = ip route add 10.97.0.0/16 via 10.0.1.1; 
#PreDown = ip route delete 10.97.0.0/16

[Peer]
Endpoint = ${HOST}:${PORT}
PublicKey = ${S_PUB_KEY}
# routes traffic to itself and entire subnet of peers as bounce server
AllowedIPs = ${S_IP},0.0.0.0/0,::/0
PersistentKeepalive = 25

EOF


Put wg0.server as /etc/wireguard/wg0.conf on the server side and wg0.client in the same place on the client side.

Bring up the interface on both machines

wg-quick up wg0
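
Check the tunnel from the client side (the server address is the one from the script above):

wg show
ping -c 3 192.168.2.1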

For Android clients you can present the config file as a QR code

qrencode -t ansiutf8 < wg0.client

Scripts for streaming desktop

ffmpeg \
   -video_size 1920x1080 -framerate 60 \
  -f x11grab -i :0.0+100,200 \
  -f alsa -i default \
  -f webm -cluster_size_limit 2M -cluster_time_limit 5100 -content_type video/webm \
  -c:a libvorbis -b:a 96K \
  -c:v libvpx -b:v 1.5M -crf 30 -g 150 -deadline good -threads 4 \
  icecast://source:hackme@localhost:8754/stream.webm

# http://localhost:8754/stream.webm

Source – https://gitlab.com/guoyunhe/plasma-cast/blob/master/stream.sh

Afterwards you can use https://github.com/balloob/pychromecast/ to cast the stream to a Chromecast.