Penetration Testing & Security

ELK, ELK Installation

잉여토끼 2024. 12. 11. 19:02

Background of ELK

Monitoring

SSH: provides system information at connection time.

SNMP: runs an SNMP walk to collect OID values and gather information. Its drawback is the roughly 5-minute polling interval, which is needed because of the amount of traffic it generates.

 

Today, network speed, server performance, and storage capacity have all increased --> the rise of the cloud.

 

Thanks to the cloud, unlike in the past, separate networks can be grouped under a single domain to form a server farm, so networks now operate at large scale.

Most security solutions are local in scope.

The network and domain they protect share the same physical location.

--> Integrated management is needed.

 

SNMP-based tools such as Cacti or MRTG impose a heavy load; distributed deployment can at least reduce it.

With ESM (Enterprise Security Management), logs are analyzed centrally, so even distributed deployment does not help; the log volume is simply too large. --> Filter and store only security-related information. This approach is called SIEM (Security Information and Event Management).

SIEM aggregates the information that systems send plus service-usage information, and extracts only what is needed.

The representative system for this is Elastic.


ESM

ESM : Enterprise Security Management (integrated security management)

Types

  • IDS : Security Onion
  • IPS : pfSense
  • VPN : ASAv / Router
  • Authentication / encryption
  • Antivirus
  • WAF : mod_security
  • Backup --> heterogeneous solutions: integration

UTM

Each security appliance logs information in its own vendor-specific format.

--> Integration is difficult.

 

The original security concept was the firewall (intrusion blocking), but security today has become specialized, covering intrusion detection (IDS), virtual private networks (VPN), system security, authentication, antivirus, and data backup.

 

The confidentiality, integrity, availability, and access-control functions provided by each security technology need to interoperate, and they deliver greater effect when they do.

 

UTM(Unified Threat Management)

: unified security appliance

By providing multiple security functions in a single appliance, a UTM responds to diverse and complex threats; its convenience and cost savings have made it an unstoppable trend in the network security market.

 

Features

  • Firewall
  • VPN
  • IPS/IDS
  • Anti-virus / Anti-spyware
  • Anti-spam
  • Web Filtering
  • Application Control
  • L2/L3 Routing
  • Data Loss Prevention
  • WAN Optimization
  • Wireless LAN Security

Limitations

  • Products are centered on user policy management --> actual security effectiveness is relatively low.
  • Vulnerability / risk analysis and monitoring remain limited.

 


Structure (Node)

Clusters can be built using the Node concept.

It has a hierarchical structure and is well suited to cloud environments.

 

SIEM (Security Information and Event Management)

Reference point: time --> NTP synchronization is required. Time carries a regional offset (here GMT+9:00 relative to UTC).

Collect information (Logstash) --> analyze (Elasticsearch) --> derive results --> document --> web (Kibana)

Elasticsearch

: works like a DBMS (a timestamp/time reference is assigned --> all information can be viewed on a single timeline).

 

Elasticsearch is comparable to a DBMS.

When you create a database in a DBMS, that database becomes the search target.

Index (indices) --> the target of searches.

The difference from a DBMS: it looks like a single database, but it is a flowing one; data is being added and updated in real time.

--> It can be thought of as a pipe.

 

Control port: 9200/tcp

Data port: 9300/tcp
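
As a quick illustration of the "index as the search target" idea, the REST API on 9200/tcp can be used to index a document and search it back. A minimal sketch; the host IP, index name, and field names are placeholders, and security is assumed to be disabled as in the installation below.

curl -X POST http://172.16.20.8:9200/demo_index/_doc \
     -H 'Content-Type: application/json' \
     -d '{"@timestamp": "2024-12-11T19:00:00", "message": "hello elk"}'

curl 'http://172.16.20.8:9200/demo_index/_search?q=message:hello&pretty'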

Logstash (log collector)

: If Elasticsearch also had to collect the data, the load would be too high --> Logstash performs that role instead. It collects host information and forwards it to Elasticsearch, filtering and normalizing the format during collection.

Kibana (GUI)

: Fetches information from Elasticsearch and presents it in a graphical GUI.

Port : 5601/tcp

 

 

Beats

: Runs on a server and collects that server's own information, then forwards it to Logstash or to Elasticsearch; the destination is configurable.

 

 

 

 

1. Any device that can generate logs (servers / security appliances / solutions / virtual servers (AWS, GCP, NHN))

2. Data collector (Beats, for specific logs) --> can send directly to Elasticsearch, or to Logstash.

3. Data collector (Logstash): aggregates the information sent by Beats.

4. Data collector (Logstash): consolidates the logs (filtering / standardization).

5. Elasticsearch: loads and analyzes the information sent by Logstash / Beats (a minimal pipeline for this leg is sketched after this list).

6. Kibana: visualizes the information sent by Elasticsearch.
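
The Beats --> Logstash --> Elasticsearch leg of this flow can be written as a minimal Logstash pipeline. This is only a sketch: the file name, the Beats port 5044, and the index name are assumptions, not part of the original setup.

# /etc/logstash/conf.d/beats.conf  (hypothetical)
input {
        beats {
                port => 5044                              # Beats ship events to this port
        }
}
output {
        elasticsearch {
                hosts => ["http://172.16.20.8:9200"]      # step 5: load into Elasticsearch
                index => "beats-%{+YYYY.MM.dd}"
        }
}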

 

 


Start / stop order

Errors can occur, so it is best to follow this order.

Stopping is the reverse of starting.

1. Start

Elasticsearch --> Kibana --> Logstash --> Beats

 

2. Stop

Beats --> Logstash --> Kibana --> Elasticsearch
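
Expressed as systemd commands, assuming the service names used later in this post (with Packetbeat standing in for the Beats layer), the order looks like this sketch:

# start
systemctl start elasticsearch.service
systemctl start kibana.service
systemctl start logstash.service
systemctl start packetbeat.service

# stop (reverse order)
systemctl stop packetbeat.service
systemctl stop logstash.service
systemctl stop kibana.service
systemctl stop elasticsearch.service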


Installation

Prerequisite: the server running Elasticsearch needs at least 8 GB of memory.

Elasticsearch needs 4 GB and Java needs 4 GB.

 

1. Install Elasticsearch

https://www.elastic.co/kr/downloads/elasticsearch

 


Create an elasticsearch.repo file in /etc/yum.repos.d/:

 

[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

 

Install

sudo dnf install --enablerepo=elasticsearch elasticsearch

 

Be sure to save the generated password for the elastic user shown during installation.

 

[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# vim elasticsearch.repo
[root@localhost yum.repos.d]# sudo dnf install --enablerepo=elasticsearch elasticsearch
MariaDB                                                        10 kB/s | 3.4 kB     00:00
CentOS Stream 9 - BaseOS                                       11 kB/s | 7.2 kB     00:00
CentOS Stream 9 - BaseOS                                      7.5 MB/s | 8.4 MB     00:01
CentOS Stream 9 - AppStream                                    11 kB/s | 7.3 kB     00:00
CentOS Stream 9 - AppStream                                   9.2 MB/s |  21 MB     00:02
CentOS Stream 9 - Extras packages                              11 kB/s | 8.9 kB     00:00
Elasticsearch repository for 8.x packages                      53 MB/s |  80 MB     00:01
Extra Packages for Enterprise Linux 9 - x86_64                 15 kB/s |  17 kB     00:01
Extra Packages for Enterprise Linux 9 - x86_64                 13 MB/s |  23 MB     00:01
Extra Packages for Enterprise Linux 9 openh264 (From Cisco) - 1.4 kB/s | 993  B     00:00
Extra Packages for Enterprise Linux 9 - Next - x86_64          32 kB/s |  17 kB     00:00
Extra Packages for Enterprise Linux 9 - Next - x86_64         185 kB/s | 228 kB     00:01
Remi's Modular repository for Enterprise Linux 9 - x86_64     2.7 kB/s | 3.5 kB     00:01
Remi's Modular repository for Enterprise Linux 9 - x86_64     232 kB/s | 748 kB     00:03
Safe Remi's RPM repository for Enterprise Linux 9 - x86_64    2.6 kB/s | 3.0 kB     00:01
Safe Remi's RPM repository for Enterprise Linux 9 - x86_64    326 kB/s | 1.1 MB     00:03
종속성이 해결되었습니다.
==============================================================================================
 꾸러미                  구조             버전                  저장소                   크기
==============================================================================================
설치 중:
 elasticsearch           x86_64           8.16.1-1              elasticsearch           606 M

연결 요약
==============================================================================================
설치  1 꾸러미

전체 내려받기 크기: 606 M
설치된 크기 : 1.1 G
진행할까요? [y/N]: y
꾸러미 내려받기 중:
elasticsearch-8.16.1-x86_64.rpm                                45 MB/s | 606 MB     00:13
----------------------------------------------------------------------------------------------
합계                                                           45 MB/s | 606 MB     00:13
Elasticsearch repository for 8.x packages                      26 kB/s | 1.8 kB     00:00
GPG키 0xD88E42B4 가져오는 중:
사용자 ID : "Elasticsearch (Elasticsearch Signing Key) <dev_ops@elasticsearch.org>"
지문: 4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4
출처 : https://artifacts.elastic.co/GPG-KEY-elasticsearch
진행할까요? [y/N]: y
키 가져오기에 성공했습니다
연결 확인 실행 중
연결 확인에 성공했습니다.
연결 시험 실행 중
연결 시험에 성공했습니다.
연결 실행 중
  준비 중     :                                                                           1/1
  구현 중     : elasticsearch-8.16.1-1.x86_64                                             1/1
Creating elasticsearch group... OK
Creating elasticsearch user... OK

  설치 중     : elasticsearch-8.16.1-1.x86_64                                             1/1
  구현 중     : elasticsearch-8.16.1-1.x86_64                                             1/1
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : z7S3Cwr40Y*CoT+uZujD

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.

-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

/usr/lib/tmpfiles.d/elasticsearch.conf:1: Line references path below legacy directory /var/run/, updating /var/run/elasticsearch → /run/elasticsearch; please update the tmpfiles.d/ drop-in file accordingly.

  확인 중     : elasticsearch-8.16.1-1.x86_64                                             1/1

설치되었습니다:
  elasticsearch-8.16.1-1.x86_64

완료되었습니다!

 

Reload systemd, then enable and start the service

 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
 sudo systemctl start elasticsearch.service

 

Edit the configuration files

Path : /etc/elasticsearch

# jvm.options

## Adjust the JVM heap. The values must contain no spaces.
-Xms4g
-Xmx4g
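
Instead of editing jvm.options itself, RPM installs of Elasticsearch also read drop-in files from /etc/elasticsearch/jvm.options.d/, which survive package upgrades. A minimal sketch; the file name is arbitrary:

# /etc/elasticsearch/jvm.options.d/heap.options  (hypothetical file name)
-Xms4g
-Xmx4g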

 

# elasticsearch.yml

## Cluster name
cluster.name: my-application

## Node name
node.name: node-1

## Node network settings
network.host: 172.16.20.8

## HTTP port to use
http.port: 9200

## Hosts used to discover and verify nodes
discovery.seed_hosts: ["127.0.0.1", "172.16.20.8"]

## Cluster node settings
## Comment this out if errors occur,
## or comment out the localhost entry (around line 109 of the default file)
cluster.initial_master_nodes: ["node-1"]




## Disable X-Pack security
xpack.security.enabled: false

xpack.security.enrollment.enabled: false

 

Restart

systemctl restart elasticsearch.service

 

 

Verify the installation

[root@localhost elasticsearch]# curl -XGET http://172.16.20.8:9200
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "2hDhpHcfS9yC1aHVyh6d4g",
  "version" : {
    "number" : "8.16.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "ffe992aa682c1968b5df375b5095b3a21f122bf3",
    "build_date" : "2024-11-19T16:00:31.793213192Z",
    "build_snapshot" : false,
    "lucene_version" : "9.12.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
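
Cluster health can be checked the same way. On a single node the status will typically be yellow, because replica shards have nowhere to go:

curl http://172.16.20.8:9200/_cluster/health?pretty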

 

 

2. Install Kibana --> a WAS (web tier) is required

https://www.elastic.co/downloads/kibana

 


 

 

Create a kibana.repo file in /etc/yum.repos.d/:

[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

 

Install

sudo dnf install kibana

 

[root@localhost yum.repos.d]# sudo dnf install kibana -y
Kibana repository for 8.x packages                                                                                                                                                    5.7 MB/s |  80 MB     00:14
마지막 메타자료 만료확인(0:00:30 이전): 2024년 12월 11일 (수) 오후 04시 26분 00초.
종속성이 해결되었습니다.
======================================================================================================================================================================================================================
 꾸러미                                            구조                                              버전                                                 저장소                                                 크기
======================================================================================================================================================================================================================
설치 중:
 kibana                                            x86_64                                            8.16.1-1                                             kibana-8.x                                            333 M

연결 요약
======================================================================================================================================================================================================================
설치  1 꾸러미

전체 내려받기 크기: 333 M
설치된 크기 : 993 M
꾸러미 내려받기 중:
kibana-8.16.1-x86_64.rpm                                                                                                                                                               13 MB/s | 333 MB     00:24
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
합계                                                                                                                                                                                   13 MB/s | 333 MB     00:24
연결 확인 실행 중
연결 확인에 성공했습니다.
연결 시험 실행 중
연결 시험에 성공했습니다.
연결 실행 중
  준비 중     :                                                                                                                                                                                                   1/1
  구현 중     : kibana-8.16.1-1.x86_64                                                                                                                                                                            1/1
  설치 중     : kibana-8.16.1-1.x86_64                                                                                                                                                                            1/1
  구현 중     : kibana-8.16.1-1.x86_64                                                                                                                                                                            1/1
Creating kibana group... OK
Creating kibana user... OK

Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/8.16/production.html#openssl-legacy-provider
Created Kibana keystore in /etc/kibana/kibana.keystore

/usr/lib/tmpfiles.d/elasticsearch.conf:1: Line references path below legacy directory /var/run/, updating /var/run/elasticsearch → /run/elasticsearch; please update the tmpfiles.d/ drop-in file accordingly.

  확인 중     : kibana-8.16.1-1.x86_64                                                                                                                                                                            1/1

설치되었습니다:
  kibana-8.16.1-1.x86_64

완료되었습니다!

 

 

Configuration --> connect Kibana to Elasticsearch

Path : /etc/kibana

 

# kibana.yml

## kibana port
server.port: 5601

## kibana host IP
server.host: "172.16.20.8"

## Elasticsearch connection
elasticsearch.hosts: ["http://172.16.20.8:9200"]

 

Restart

[root@localhost kibana]# systemctl restart kibana.service
[root@localhost kibana]# systemctl status kibana.service
● kibana.service - Kibana
     Loaded: loaded (/usr/lib/systemd/system/kibana.service; disabled; preset: disabled)
     Active: active (running) since Wed 2024-12-11 16:51:50 KST; 8s ago
       Docs: https://www.elastic.co
   Main PID: 8619 (node)
      Tasks: 11 (limit: 48580)
     Memory: 332.8M
        CPU: 9.551s
     CGroup: /system.slice/kibana.service
             └─8619 /usr/share/kibana/bin/../node/glibc-217/bin/node /usr/share/kibana/bin/../src/cli/dist

12월 11 16:51:50 localhost.localdomain systemd[1]: Started Kibana.
12월 11 16:51:50 localhost.localdomain kibana[8619]: Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/8.1>
12월 11 16:51:51 localhost.localdomain kibana[8619]: {"log.level":"info","@timestamp":"2024-12-11T07:51:51.567Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.7.3","env":{"pid":8619,"pro>
12월 11 16:51:51 localhost.localdomain kibana[8619]: Native global console methods have been overridden in production environment.
12월 11 16:51:52 localhost.localdomain kibana[8619]: [2024-12-11T16:51:52.941+09:00][INFO ][root] Kibana is starting
12월 11 16:51:52 localhost.localdomain kibana[8619]: [2024-12-11T16:51:52.980+09:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]

 

Access Kibana

http://172.16.20.8:5601
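
Reachability can also be checked from the shell before opening a browser. A sketch using Kibana's status endpoint:

curl -s http://172.16.20.8:5601/api/status | head -c 300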

 

 

 

 

3. Install Logstash

https://www.elastic.co/kr/downloads/logstash

 


 

 

Register the GPG key

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

 

Create a logstash.repo file in the /etc/yum.repos.d/ directory:

[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

 

Install

sudo yum install logstash

 

Configuration

Path : /etc/logstash/

# logstash.yml

# ------------ API Settings -------------
## Enable the API
api.enabled: true

## api ip
api.http.host: 172.16.20.8


## api port
api.http.port: 9600-9700
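
With these settings the Logstash monitoring API answers on the first free port in the range (9600 by default). A quick check:

curl -XGET 'http://172.16.20.8:9600/?pretty'
curl -XGET 'http://172.16.20.8:9600/_node/stats/pipelines?pretty'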

 

## Logstash example: detecting SSH brute-force attempts

cd /etc/logstash/conf.d
cat > sshd.conf << 'EOF'
input {
        file {
                type => "Secure_log"
                path => "/var/log/secure"
        }
}
filter {
        grok {
                add_tag => [ "sshd_fail" ]
                match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
        }
}
output {
        elasticsearch {
                hosts => ["http://172.16.20.8:9200"]
                index => "sshd_fail-%{+YYYY.MM}"
        }
}
EOF

 

When a line in /var/log/secure matches the grok pattern, the tag given in add_tag is attached and the event is sent to Elasticsearch.

 

The logstash service must be able to read /var/log/secure, so check the permissions.

[root@localhost conf.d]# chgrp logstash /var/log/secure
[root@localhost conf.d]# chmod 640 /var/log/secure
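
To sanity-check the pipeline, the configuration can be validated and then the index queried after a few failed logins. A sketch; paths assume the default RPM layout:

# validate the pipeline syntax
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
    --config.test_and_exit -f /etc/logstash/conf.d/sshd.conf

# restart Logstash, fail a few SSH logins against this host,
# then confirm the monthly index received the tagged events
systemctl restart logstash
curl 'http://172.16.20.8:9200/sshd_fail-*/_search?q=tags:sshd_fail&pretty'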

 

Another example: Snort

# Snort example

input {
  file {
    path => "/var/log/snort/alert"
    type => "snort_log"
    start_position => beginning
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}

filter {
  if [type] == "snort_log" {
    grok {
      match => [ "message", "%{SNORTTIME:snort_time}\s+\[\*\*]\s+\[%{INT:gid}\:%{INT:sid}\:%{INT:rev}\]\s+\[%{DATA:atk_cat}\]\s+\[\*\*\]\s+\[Priority:\s+%{INT:priority}\]\s+{%{DATA:protocol}}\s+%{IP:src_ip}(\:%{INT:src_port})?\s+\-\>\s+%{IP:dst_ip}(\:%{INT:dst_port})?"]
      }
  }
  date {
    match => [ "snort_time", "MM/dd-HH:mm:ss.SSSSSS" ]
  }
  geoip {
    source => "src_ip"
    target => "geoip_snort_src"
  }
  geoip {
    source => "dst_ip"
    target => "geoip_snort_dst"
  }
  mutate {
    convert => {"[location][lat]" => "float"}
    convert => {"[location][lon]" => "float"}
  }
}


output {
  elasticsearch {
    hosts => ["http://172.16.20.8:9200"]
    index => "logstash_snort-%{+YYYY.MM}"
  }
}
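
Once Snort is writing alerts and Logstash is running, the monthly index can be confirmed with the _cat API (a sketch):

curl 'http://172.16.20.8:9200/_cat/indices/logstash_snort-*?v'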

 

4. Install Beats

https://www.elastic.co/kr/downloads/beats

 


 

Install Packetbeat

dnf -y install packetbeat

 

Configuration

Path : /etc/packetbeat

# packetbeat.yml

## Whether to load the Kibana dashboards automatically
## (this also ties them to the Elasticsearch index)
setup.dashboards.enabled: true

## kibana setting
setup.kibana:
  host: "172.16.20.8:5601"

## Output settings
## Elasticsearch or Logstash
output.elasticsearch:
  hosts: ["172.16.20.8:9200"]
  
#output.logstash:
  #hosts: ["172.16.20.8:5044"]
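
Before starting the service, the configuration and the connection to Elasticsearch can be checked with Packetbeat's built-in test subcommands (a sketch):

packetbeat test config -c /etc/packetbeat/packetbeat.yml
packetbeat test output -c /etc/packetbeat/packetbeat.yml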

 

Start

systemctl start packetbeat

 

 

Dashboard Setup

Set it up so the dashboards receive data from Elasticsearch

packetbeat setup --dashboards

# Heartbeat
# heartbeat setup -e

 

[root@localhost packetbeat]# packetbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards

 

 

5. Verify Packetbeat

Stack Management --> Index Management --> Data Streams
Discover

 

 

 


Beats

Filebeat

  • Collects log file data
  • Used for system logs, application logs, and similar sources
  • Just specify the path where web logs or machine logs are written; Filebeat reads the files in that path and ships new content whenever it is appended

[Installation]

dnf install filebeat -y

 

# /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log

setup.dashboards.enabled: true


setup.kibana:
  host: "172.16.20.8:5601"

output.elasticsearch:
  hosts: ["172.16.20.8:9200"]

 

systemctl start filebeat
filebeat setup --dashboards

 

[root@localhost filebeat]# filebeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
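
Whether Filebeat is actually shipping data can be confirmed from Elasticsearch; Beats 8.x write into data streams named after the Beat and its version. A sketch:

curl 'http://172.16.20.8:9200/_data_stream/filebeat-*?pretty'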

 

Packetbeat

  • Sniffs packets on the host it is installed on, monitoring traffic and collecting packet data
  • Used for network performance monitoring and similar tasks
  • Can analyze protocol data such as HTTP, MySQL, and DNS

Winlogbeat

  • Collects Windows event log data
  • Used for Windows server and client logs

Metricbeat

  • Collects metric data from systems and services
  • Used to monitor CPU usage, memory status, disk I/O, network traffic, etc.
  • Can also collect metrics from services such as Docker

[Installation]

 dnf -y install metricbeat

 

# /etc/metricbeat/metricbeat.yml

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  _source.enabled: true


setup.dashboards.enabled: true


setup.kibana:
  host: "172.16.20.8:5601"
  
output.elasticsearch:
  hosts: ["172.16.20.8:9200"]

[root@localhost filebeat]# systemctl start metricbeat
[root@localhost filebeat]# metricbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
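
Since metricbeat.config.modules points at modules.d/, individual modules are toggled with the modules subcommand. A sketch (the system module is usually enabled by default):

metricbeat modules list
metricbeat modules enable system
systemctl restart metricbeat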

Heartbeat

  • Monitors the availability of applications and services
  • Used to check the status of web services, databases, and other network services
  • Checks status periodically using protocols such as ICMP, HTTP, and TCP

[Installation]

dnf -y install heartbeat-elastic

 

# /etc/heartbeat/heartbeat.yml

heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: true
  reload.period: 5s

heartbeat.monitors:
- type: http
  enabled: true
  id: my-monitor
  name: My Monitor
  urls: ["http://172.16.20.8:9200"]
  schedule: '@every 10s'


setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  _source.enabled: true



setup.kibana:
  host: "172.16.20.8:5601"

output.elasticsearch:
  hosts: ["172.16.20.8:9200"]
  preset: balanced
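
Because heartbeat.config.monitors reloads /etc/heartbeat/monitors.d/*.yml every 5 seconds, extra checks can be dropped in as separate files without touching heartbeat.yml. A sketch; the file name and the Kibana TCP check are illustrative only:

# /etc/heartbeat/monitors.d/kibana.yml  (hypothetical)
- type: tcp
  id: kibana-tcp
  name: Kibana TCP check
  enabled: true
  hosts: ["172.16.20.8:5601"]
  schedule: '@every 10s'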

 

[root@localhost filebeat]# systemctl start heartbeat-elastic.service
[root@localhost filebeat]# heartbeat setup -e
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.184+0900","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":1058},"message":"Home path: [/usr/share/heartbeat] Config path: [/etc/heartbeat] Data path: [/var/lib/heartbeat] Logs path: [/var/log/heartbeat]","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.184+0900","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":1066},"message":"Beat ID: a7248814-935a-4655-8e1e-5d0ee345b29b","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.332+0900","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).createBeater","file.name":"instance/beat.go","file.line":570},"message":"Setup Beat: heartbeat; Version: 8.16.1","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.332+0900","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).logSystemInfo","file.name":"instance/beat.go","file.line":1619},"message":"Beat info","service.name":"heartbeat","system_info":{"beat":{"path":{"config":"/etc/heartbeat","data":"/var/lib/heartbeat","home":"/usr/share/heartbeat","logs":"/var/log/heartbeat"},"type":"heartbeat","uuid":"a7248814-935a-4655-8e1e-5d0ee345b29b"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.333+0900","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).logSystemInfo","file.name":"instance/beat.go","file.line":1628},"message":"Build info","service.name":"heartbeat","system_info":{"build":{"commit":"f17e0828f1de9f1a256d3f520324fa6da53daee5","libbeat":"8.16.1","time":"2024-11-14T14:57:17.000Z","version":"8.16.1"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.333+0900","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).logSystemInfo","file.name":"instance/beat.go","file.line":1631},"message":"Go runtime info","service.name":"heartbeat","system_info":{"go":{"os":"linux","arch":"amd64","max_procs":8,"version":"go1.22.9"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.333+0900","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).logSystemInfo","file.name":"instance/beat.go","file.line":1637},"message":"Host info","service.name":"heartbeat","system_info":{"host":{"architecture":"x86_64","native_architecture":"","boot_time":"2024-12-11T15:18:13+09:00","containerized":false,"name":"localhost.localdomain","ip":["127.0.0.1","::1","172.16.20.8","fe80::20c:29ff:fee4:da27"],"kernel_version":"5.14.0-496.el9.x86_64","mac":["00:0c:29:e4:da:27"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Stream","version":"9","major":9,"minor":0,"patch":0},"timezone":"KST","timezone_offset_sec":32400,"id":"75fc1165d88849bbbaba1e6e7e8d758c"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.334+0900","log.logger":"beat","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).logSystemInfo","file.name":"instance/beat.go","file.line":1666},"message":"Process info","service.name":"heartbeat","system_info":{"process":{"capabilities":{"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","perfmon","bpf","checkpoint_restore"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","perfmon","bpf","checkpoint_restore"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","perfmon","bpf","checkpoint_restore"],"ambient":null},"cwd":"/etc/filebeat","exe":"/usr/share/heartbeat/bin/heartbeat","name":"heartbeat","pid":48906,"ppid":6687,"seccomp":{"mode":"disabled","no_new_privs":false},"start_time":"2024-12-11T18:59:33.230+0900"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.341+0900","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":63},"message":"Applying performance preset 'balanced': {\n  \"bulk_max_size\": 1600,\n  \"compression_level\": 1,\n  \"idle_connection_timeout\": \"3s\",\n  \"queue\": {\n    \"mem\": {\n      \"events\": 3200,\n      \"flush\": {\n        \"min_events\": 1600,\n        \"timeout\": \"10s\"\n      }\n    }\n  },\n  \"worker\": 1\n}","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-12-11T18:59:34.341+0900","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.makeES","file.name":"elasticsearch/elasticsearch.go","file.line":66},"message":"Performance preset 'balanced' overrides user setting for field 'bulk_max_size'","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.342+0900","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.NewConnection","file.name":"eslegclient/connection.go","file.line":133},"message":"elasticsearch url: http://172.16.20.8:9200","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.342+0900","log.logger":"publisher","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.LoadWithSettings","file.name":"pipeline/module.go","file.line":105},"message":"Beat name: localhost.localdomain","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.343+0900","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.NewConnection","file.name":"eslegclient/connection.go","file.line":133},"message":"elasticsearch url: http://172.16.20.8:9200","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.345+0900","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.(*Connection).Ping","file.name":"eslegclient/connection.go","file.line":322},"message":"Attempting to connect to Elasticsearch version 8.16.1 (default)","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.345+0900","log.origin":{"function":"github.com/elastic/beats/v7/heartbeat/beater.New.AtomicStateLoader.func3","file.name":"monitorstate/tracker.go","file.line":157},"message":"Updated atomic state loader","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.345+0900","log.origin":{"function":"github.com/elastic/beats/v7/heartbeat/scheduler.getJobLimitSem","file.name":"scheduler/scheduler.go","file.line":79},"message":"limiting to 2 concurrent jobs for 'browser' type","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.346+0900","log.origin":{"function":"github.com/elastic/beats/v7/heartbeat/beater.New","file.name":"beater/heartbeat.go","file.line":144},"message":"heartbeat starting, running from: <unknown location>","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.346+0900","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.NewConnection","file.name":"eslegclient/connection.go","file.line":133},"message":"elasticsearch url: http://172.16.20.8:9200","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.347+0900","log.logger":"esclientleg","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/esleg/eslegclient.(*Connection).Ping","file.name":"eslegclient/connection.go","file.line":322},"message":"Attempting to connect to Elasticsearch version 8.16.1 (default)","service.name":"heartbeat","ecs.version":"1.6.0"}
Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.347+0900","log.logger":"index-management","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/idxmgmt.(*indexManager).Setup","file.name":"idxmgmt/index_support.go","file.line":254},"message":"Auto lifecycle enable success.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.350+0900","log.logger":"index-management.ilm","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/idxmgmt/lifecycle.(*stdManager).EnsurePolicy","file.name":"lifecycle/standard_manager.go","file.line":111},"message":"lifecycle policy heartbeat exists already.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.350+0900","log.logger":"index-management","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/idxmgmt.applyLifecycleSettingsToTemplate","file.name":"idxmgmt/index_support.go","file.line":402},"message":"Set settings.index.lifecycle.name in template to heartbeat as ILM is enabled.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.353+0900","log.logger":"template","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/template.(*templateBuilder).buildBody","file.name":"template/load.go","file.line":263},"message":"Existing template will be overwritten, as overwrite is enabled.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.472+0900","log.logger":"template_loader","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/template.(*ESLoader).loadTemplate","file.name":"template/load.go","file.line":177},"message":"Try loading template heartbeat-8.16.1 to Elasticsearch","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.497+0900","log.logger":"template_loader","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/template.(*ESLoader).Load","file.name":"template/load.go","file.line":134},"message":"Template with name \"heartbeat-8.16.1\" loaded.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.498+0900","log.logger":"template_loader","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/template.(*ESLoader).Load","file.name":"template/load.go","file.line":150},"message":"Data stream with name \"heartbeat-8.16.1\" already exists.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-12-11T18:59:34.499+0900","log.logger":"index-management","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/idxmgmt.(*indexManager).Setup","file.name":"idxmgmt/index_support.go","file.line":299},"message":"Loaded index template.","service.name":"heartbeat","ecs.version":"1.6.0"}
Index setup finished.

 

Auditbeat

  • Monitors user activity, file access, security events, etc.
  • Used for system security monitoring, user-activity auditing, and compliance checks
  • Can integrate with the Linux Audit Framework

Functionbeat

  • Collects serverless data in cloud environments
  • Event-driven data collection using AWS Lambda

 

 


Elastic Index

Query

[root@localhost elasticsearch]# curl http://172.16.20.8:9200/_aliases?pretty
{ }

 

 

Create an index

[root@localhost elasticsearch]# curl -X PUT http://172.16.20.8:9200/test_index
{"acknowledged":true,"shards_acknowledged":true,"index":"test_index"}

[root@localhost elasticsearch]# curl http://172.16.20.8:9200/_aliases?pretty
{
  "test_index" : {
    "aliases" : { }
  }
}

 

List indices

[root@localhost elasticsearch]# curl -XGET http://172.16.20.8:9200/_cat/indices?v
health status index      uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open   test_index u7eRVfPtSIyXhGpOBiEAfA   1   1          0            0       227b           227b         227b

 

The index status must be open for it to keep receiving data through the pipeline.

 

Query shards

The result of a request to Elasticsearch --> shards

A shard is analogous to a view in a DBMS.

[root@localhost elasticsearch]# curl -XGET http://172.16.20.8:9200/_cat/shards?v
index      shard prirep state      docs store dataset ip          node
test_index 0     p      STARTED       0  249b    249b 172.16.20.8 node-1
test_index 0     r      UNASSIGNED
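
The replica shard (prirep r) is UNASSIGNED because this is a single-node cluster and a replica cannot be placed on the same node as its primary. On a test setup the replica count can be dropped to 0 so the index turns green (a sketch using the index settings API):

curl -X PUT http://172.16.20.8:9200/test_index/_settings \
     -H 'Content-Type: application/json' \
     -d '{"index": {"number_of_replicas": 0}}'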