- Logging with Fluentd
- Before you begin
- Setup Fluentd
- Example Fluentd, Elasticsearch, Kibana Stack
- Configure Istio
- View the new logs
- Cleanup
- See also
Logging with Fluentd
This task shows how to configure Istio to create custom log entries and send them to a Fluentd daemon. Fluentd is an open source log collector that supports many data outputs and has a pluggable architecture. One popular logging backend is Elasticsearch, with Kibana as a viewer. At the end of this task, a new log stream will be enabled, sending logs to an example Fluentd / Elasticsearch / Kibana stack.
The Bookinfo sample application is used as the example application throughout this task.
Before you begin
- Install Istio in your cluster and deploy an application. This task assumes that Mixer is set up in a default configuration (--configDefaultNamespace=istio-system). If you use a different value, update the configuration and commands in this task to match the value.
Setup Fluentd
In your cluster, you may already have a Fluentd daemon set running, such as the add-ons described here and here, or something specific to your cluster provider. This is likely configured to send logs to an Elasticsearch system or logging provider.
You may use these Fluentd daemons, or any other Fluentd daemon you have set up, as long as they are listening for forwarded logs, and Istio's Mixer is able to connect to them. In order for Mixer to connect to a running Fluentd daemon, you may need to add a service for Fluentd (a sketch of one appears after the snippet below). The Fluentd configuration to listen for forwarded logs is:
<source>
type forward
</source>
The full details of connecting Mixer to all possible Fluentd configurations are beyond the scope of this task.
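As an illustration only, a Service like the following could make an existing Fluentd daemon set reachable by Mixer. The kube-system namespace and the k8s-app: fluentd-es selector here are assumptions about your existing deployment; adjust them to match your cluster:
# Hypothetical Service exposing an existing Fluentd daemon set to Mixer.
apiVersion: v1
kind: Service
metadata:
  name: fluentd-es
  namespace: kube-system
spec:
  ports:
  - name: fluentd-tcp
    port: 24224
    protocol: TCP
    targetPort: 24224
  selector:
    k8s-app: fluentd-es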
Example Fluentd, Elasticsearch, Kibana Stack
For the purposes of this task, you may deploy the example stack provided. This stack includes Fluentd, Elasticsearch, and Kibana in a non-production-ready set of Services and Deployments, all in a new Namespace called logging.
Save the following as logging-stack.yaml.
# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
name: elasticsearch
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: discovery.type
value: single-node
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: elasticsearch
mountPath: /data
volumes:
- name: elasticsearch
emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
name: fluentd-es
namespace: logging
labels:
app: fluentd-es
spec:
ports:
- name: fluentd-tcp
port: 24224
protocol: TCP
targetPort: 24224
- name: fluentd-udp
port: 24224
protocol: UDP
targetPort: 24224
selector:
app: fluentd-es
---
# Fluentd Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: fluentd-es
namespace: logging
labels:
app: fluentd-es
spec:
replicas: 1
selector:
matchLabels:
app: fluentd-es
template:
metadata:
labels:
app: fluentd-es
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: fluentd-es
image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
env:
- name: FLUENTD_ARGS
value: --no-supervisor -q
resources:
limits:
memory: 500Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: config-volume
mountPath: /etc/fluent/config.d
terminationGracePeriodSeconds: 30
volumes:
- name: config-volume
configMap:
name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
forward.input.conf: |-
# Takes the messages sent over TCP
<source>
type forward
</source>
output.conf: |-
<match **>
type elasticsearch
log_level info
include_tag_key true
host elasticsearch
port 9200
logstash_format true
# Set the chunk limits.
buffer_chunk_limit 2M
buffer_queue_limit 8
flush_interval 5s
# Never wait longer than 30 seconds between retries.
max_retry_wait 30
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
# Use multiple threads for processing.
num_threads 2
</match>
metadata:
name: fluentd-es-config
namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
app: kibana
---
# Kibana Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana-oss:6.1.1
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200
ports:
- containerPort: 5601
name: ui
protocol: TCP
---
Create the resources:
$ kubectl apply -f logging-stack.yaml
namespace "logging" created
service "elasticsearch" created
deployment "elasticsearch" created
service "fluentd-es" created
deployment "fluentd-es" created
configmap "fluentd-es-config" created
service "kibana" created
deployment "kibana" created
Configure Istio
Now that there is a running Fluentd daemon, configure Istio with a new log type, and send those logs to the listening daemon. Apply a YAML file with configuration for the log stream that Istio will generate and collect automatically:
$ kubectl apply -f samples/bookinfo/telemetry/fluentd-istio.yaml
If you use Istio 1.1.2 or prior, please use the following configuration instead:
$ kubectl apply -f samples/bookinfo/telemetry/fluentd-istio-crd.yaml
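For reference, both files declare the same three pieces of Mixer configuration: a logentry instance describing which request attributes to log, a fluentd handler saying where to forward them, and a rule binding the two. The sketch below uses the older CRD style of fluentd-istio-crd.yaml with an abridged variable list, so treat it as illustrative rather than an exact copy of either shipped file:
# Illustrative only: the general shape of the Mixer configuration (older CRD style).
apiVersion: config.istio.io/v1alpha2
kind: logentry
metadata:
  name: newlog
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | source.workload.name | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    responseCode: response.code | 0
  monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: config.istio.io/v1alpha2
kind: fluentd
metadata:
  name: handler
  namespace: istio-system
spec:
  address: "fluentd-es.logging:24224"
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: newlogtofluentd
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: handler.fluentd
    instances:
    - newlog.logentry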
Notice that the address: "fluentd-es.logging:24224" line in the handler configuration points to the Fluentd daemon we set up in the example stack.
View the new logs
- Send traffic to the sample application.
For the Bookinfo sample, visit http://$GATEWAY_URL/productpage in your web browser or issue the following command:
$ curl http://$GATEWAY_URL/productpage
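If $GATEWAY_URL is not already set from a previous task, one common way to determine it, assuming an istio-ingressgateway service of type LoadBalancer with an external IP, is:
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT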
- In a Kubernetes environment, set up port-forwarding for Kibana by executing the following command:
$ kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601 &
Leave the command running. Press Ctrl-C to exit when done accessing the Kibana UI.
- Navigate to the Kibana UI (http://localhost:5601 with the port-forward above) and click "Set up index patterns" in the top right.
- Use * as the index pattern, and click "Next step."
- Select @timestamp as the Time Filter field name, and click "Create index pattern."
- Now click "Discover" on the left menu, and start exploring the logs generated.
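To verify from the command line that log entries are reaching Elasticsearch, you can also port-forward Elasticsearch and list its indices; with the logstash_format true setting in the example stack, new entries land in daily logstash-* indices:
$ kubectl -n logging port-forward svc/elasticsearch 9200:9200 &
$ curl 'http://localhost:9200/_cat/indices?v'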
Cleanup
- Remove the new telemetry configuration:
$ kubectl delete -f samples/bookinfo/telemetry/fluentd-istio.yaml
If you are using Istio 1.1.2 or prior:
$ kubectl delete -f samples/bookinfo/telemetry/fluentd-istio-crd.yaml
- Remove the example Fluentd, Elasticsearch, Kibana stack:
$ kubectl delete -f logging-stack.yaml
- Remove any kubectl port-forward processes that may still be running:
$ killall kubectl
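Note that killall kubectl stops every kubectl process you have running; if that is too broad, a more targeted alternative is to match only the port-forward command line:
$ pkill -f 'kubectl -n logging port-forward'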
- If you are not planning to explore any follow-on tasks, refer to the Bookinfo cleanup instructions to shut down the application.
See also
Mixer and the SPOF Myth
Improving availability and reducing latency.
Mixer Adapter Model
Provides an overview of Mixer's plug-in architecture.
Collecting Logs
This task shows you how to configure Istio to collect and customize logs.
Collecting Metrics
This task shows you how to configure Istio to collect and customize metrics.
Collecting Metrics for TCP services
This task shows you how to configure Istio to collect metrics for TCP services.
Getting Envoy's Access Logs
This task shows you how to configure Envoy proxies to print their access logs to standard output.