[DOC] Fix broken links in Tempo doc (#4827)

* Fix broken links in Tempo doc

* Update links to agent with AGENT_VERSION

* Fix alias for grafana agent
Kim Nylander
2025-03-11 01:23:03 -04:00
committed by GitHub
parent e84574c400
commit 265188db15
22 changed files with 66 additions and 59 deletions


@ -8,14 +8,14 @@
<a href="https://goreportcard.com/report/github.com/grafana/tempo"><img src="https://goreportcard.com/badge/github.com/grafana/tempo" alt="Go Report Card" /></a>
</p>
Grafana Tempo is an open source, easy-to-use, and high-scale distributed tracing backend. Tempo is cost-efficient, requiring only object storage to operate, and is deeply integrated with Grafana, Prometheus, and Loki.
## Business value of distributed tracing
Distributed tracing helps teams quickly pinpoint performance issues and understand the flow of requests across services. The Traces Drilldown UI simplifies this process by offering a user-friendly interface to view and analyze trace data, making it easier to identify and resolve issues without needing to write complex queries.
Refer to [Use traces to find solutions](https://grafana.com/docs/tempo/latest/introduction/solutions-with-traces/) to learn more about how you can use distributed tracing to investigate and solve issues.
## Traces Drilldown UI: A better way to get value from your tracing data
We are excited to introduce the [Traces Drilldown](https://github.com/grafana/traces-drilldown) (formerly Explore Traces) app as part of the Grafana Explore suite. This app provides a queryless and intuitive experience for analyzing tracing data, allowing teams to quickly identify performance issues, latency bottlenecks, and errors without needing to write complex queries or use TraceQL.
@ -29,18 +29,17 @@ Key Features:
![image](https://github.com/user-attachments/assets/991205df-1b27-489f-8ef0-1a05ee158996)
To learn more, see the following links:
- [Traces Drilldown repo](https://github.com/grafana/traces-drilldown)
- [Traces Drilldown documentation](https://grafana.com/docs/grafana/latest/explore/simplified-exploration/traces/)
- [Demo video](https://www.youtube.com/watch?v=a3uB1C2oHA4)
## TraceQL
Tempo implements [TraceQL](https://grafana.com/docs/tempo/latest/traceql/), a traces-first query language inspired by LogQL and PromQL, which enables targeted queries or rich UI-driven analyses.
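For instance, a query such as `{ resource.service.name = "frontend" && duration > 2s }` selects spans from the `frontend` service that took longer than two seconds (the service name here is only an illustration).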
### TraceQL metrics
[TraceQL metrics](https://grafana.com/docs/tempo/latest/traceql/metrics-queries/) is an experimental feature in Grafana Tempo that creates metrics from traces. Metric queries extend trace queries by applying a function to trace query results. This powerful feature allows for ad hoc aggregation of any existing TraceQL query by any dimension available in your traces, much in the same way that LogQL metric queries create metrics from logs.
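For instance, a metrics query such as `{ status = error } | rate() by (resource.service.name)` turns a trace query into a per-service rate of erroring spans (the grouping attribute is only an illustration).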
Tempo is Jaeger, Zipkin, Kafka, OpenCensus, and OpenTelemetry compatible. It ingests batches in any of the mentioned formats, buffers them, and then writes them to Azure, GCS, S3, or local disk. As such, it is robust, cheap, and easy to operate!
@ -56,6 +55,8 @@ Tempo is Jaeger, Zipkin, Kafka, OpenCensus, and OpenTelemetry compatible. It ing
To learn more about Tempo, consult the following documents & talks:
- [How to get started with Tempo with Joe Elliot (video)](https://www.youtube.com/watch?v=zDrA7Ly3ovU)
- [Grafana blog posts about Tempo](https://grafana.com/tags/tempo/)
- [New in Grafana Tempo 2.0: Apache Parquet as the default storage format, support for TraceQL][tempo_20_announce]
- [Get to know TraceQL: A powerful new query language for distributed tracing][traceql-post]


@ -109,7 +109,7 @@ The easiest way to get the trace is to execute a simple curl command to Tempo. T
### Use TraceQL to search for a trace
Alternatively, you can also use [TraceQL](https://grafana.com/docs/tempo/<TEMPO_VERSION>/traceql/) to search for the trace that was pushed.
You can search by using the unique trace attributes that were set:
```bash


@ -3,7 +3,8 @@ title: Grafana Agent
description: Configure the Grafana Agent to work with Tempo
weight: 600
aliases:
- /docs/tempo/grafana-agent
- ../../grafana-agent # /docs/tempo/latest/grafana-agent
---
# Grafana Agent
@ -35,14 +36,14 @@ leverages all the data that's processed in the pipeline.
Grafana Agent is available in two different variants:
* [Static mode](/docs/agent/<AGENT_VERSION>/static): The original Grafana Agent.
* [Flow mode](/docs/agent/<AGENT_VERSION>/flow): The new, component-based Grafana Agent.
Grafana Agent Flow configuration files are [written in River](/docs/agent/<AGENT_VERSION>/flow/concepts/config-language/).
Static configuration files are [written in YAML](/docs/agent/<AGENT_VERSION>/static/configuration/).
Examples in this document are for Flow mode.
For more information, refer to the [Introduction to Grafana Agent](/docs/agent/<AGENT_VERSION>/about/).
## Architecture
@ -50,7 +51,7 @@ The Grafana Agent can be configured to run a set of tracing pipelines to collect
Pipelines are built using OpenTelemetry,
and consist of `receivers`, `processors`, and `exporters`.
The architecture mirrors that of the OTel Collector's [design](https://github.com/open-telemetry/opentelemetry-collector/blob/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/design.md).
See the [configuration reference](/agent/<AGENT_VERSION>/static/configuration/traces-config/) for all available configuration options.
<p align="center"><img src="https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/images/design-pipelines.png" alt="Tracing pipeline architecture"></p>
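For a rough sense of that receiver/exporter shape in static mode (the examples on this page are otherwise for Flow mode), a minimal pipeline wires a receiver to a `remote_write` exporter. This is only a sketch: the endpoint is a placeholder, and field names should be checked against the traces-config reference above.

```yaml
traces:
  configs:
    - name: default
      # Receiver: accept OTLP spans over gRPC from instrumented applications.
      receivers:
        otlp:
          protocols:
            grpc:
      # Exporter: forward the buffered spans to a Tempo-compatible endpoint.
      remote_write:
        - endpoint: tempo.example.com:4317
          insecure: true
```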
@ -75,13 +76,13 @@ The Grafana Agent processes tracing data as it flows through the pipeline to mak
The Agent supports batching of traces.
Batching helps better compress the data, reduces the number of outgoing connections, and is a recommended best practice.
To configure it, refer to the `batch` block in the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
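A minimal static-mode sketch, assuming the option names mirror the OpenTelemetry batch processor (verify against the configuration reference above):

```yaml
traces:
  configs:
    - name: default
      # Batch spans before export to reduce outgoing connections and improve compression.
      batch:
        timeout: 5s            # flush a partially filled batch at least this often
        send_batch_size: 1000  # or as soon as this many spans have accumulated
```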
#### Attributes manipulation
The Grafana Agent allows for general manipulation of attributes on spans that pass through this agent.
A common use may be to add an environment or cluster variable.
To configure it, refer to the `attributes` block in the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
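A hedged static-mode sketch, assuming the `attributes` block accepts the OpenTelemetry attributes-processor `actions` format (the key and value are placeholders):

```yaml
traces:
  configs:
    - name: default
      # Add or overwrite an attribute on every span that passes through the pipeline.
      attributes:
        actions:
          - key: env
            value: production
            action: upsert
```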
#### Attaching metadata with Prometheus Service Discovery
@ -113,7 +114,7 @@ All of Prometheus' [various service discovery mechanisms](https://prometheus.io/
This means you can use the same `scrape_configs` between your metrics, logs, and traces to get the same set of labels,
and easily transition between your metrics, logs, and traces when investigating an issue.
Refer to the `scrape_configs` block in the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
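A hedged static-mode sketch, assuming `scrape_configs` takes the familiar Prometheus service-discovery format (the job name and role are placeholders):

```yaml
traces:
  configs:
    - name: default
      # Reuse Prometheus-style service discovery to attach metadata,
      # such as Kubernetes pod labels, to incoming spans.
      scrape_configs:
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod
```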
#### Trace discovery through automatic logging
@ -156,4 +157,4 @@ Aside from endpoint and authentication, the exporter also provides mechanisms fo
and implements a queue buffering mechanism for transient failures, such as networking issues.
To see all available options,
refer to the `remote_write` block in the [Agent configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
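A hedged static-mode sketch, assuming `remote_write` exposes the OpenTelemetry exporter retry settings (the endpoint is a placeholder; check the reference above for the full option set):

```yaml
traces:
  configs:
    - name: default
      remote_write:
        - endpoint: tempo.example.com:4317
          # Queue and retry sends that fail transiently, for example during network blips.
          retry_on_failure:
            enabled: true
```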


@ -51,7 +51,7 @@ For more information, refer to [Migrate to Alloy](https://grafana.com/docs/tempo
To configure automatic logging, you need to select your preferred backend and the trace data to log.
To see all the available configuration options, refer to the [configuration reference](https://grafana.com/docs/agent/<AGENT_VERSION>/configuration/traces-config).
This simple example logs trace roots to `stdout` and is a good way to get started using automatic logging:
```yaml


@ -47,7 +47,7 @@ traces:
enabled: true
```
To see all the available configuration options, refer to the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config).
Metrics are registered in the Agent's default registerer.
Therefore, they are exposed at `/metrics` in the Agent's server port (default `12345`).
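For example, with the default server settings, `curl http://localhost:12345/metrics` should return these metrics (the host and port are placeholders if you have customized the server block).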


@ -18,9 +18,9 @@ Probabilistic sampling strategies are easy to implement,
but also run the risk of discarding relevant data that you'll later want.
Tail-based sampling works with Grafana Agent in Flow or static modes.
Flow mode configuration files are [written in River](/docs/agent/<AGENT_VERSION>/flow/concepts/config-language).
Static mode configuration files are [written in YAML](/docs/agent/<AGENT_VERSION>/static/configuration).
Examples in this document are for Flow mode. You can also use the [Static mode Kubernetes operator](/docs/agent/<AGENT_VERSION>/operator).
## How tail-based sampling works
@ -57,7 +57,7 @@ This overhead increases with the number of Agent instances that share the same t
To start using tail-based sampling, define a sampling policy.
If you're using a multi-instance deployment of the agent,
add load balancing and specify the resolving mechanism to find other Agents in the setup.
To see all the available configuration options, refer to the [configuration reference](/docs/agent/<AGENT_VERSION>/configuration/traces-config/).
{{< admonition type="note">}}
Grafana Alloy provides tooling to convert your Agent Static or Flow configuration files into a format that can be used by Alloy.
@ -67,10 +67,10 @@ For more information, refer to [Migrate to Alloy](https://grafana.com/docs/tempo
### Example for Grafana Agent Flow
[Grafana Agent Flow](/docs/agent/<AGENT_VERSION>/flow/) is a component-based revision of Grafana Agent with a focus on ease-of-use, debuggability, and ability to adapt to the needs of power users.
Flow configuration files are written in River instead of YAML.
Grafana Agent Flow uses the [`otelcol.processor.tail_sampling` component](/docs/agent/<ALLOY_VERSION>/flow/reference/components/otelcol/otelcol.processor.tail_sampling/) for tail-based sampling.
```river
otelcol.receiver.otlp "otlp_receiver" {


@ -18,7 +18,7 @@ Alloy is flexible, and you can easily configure it to fit your needs in on-prem,
It's commonly used as a tracing pipeline, offloading traces from the
application and forwarding them to a storage backend.
Grafana Alloy configuration files are written in the [Alloy configuration syntax](https://grafana.com/docs/alloy/<ALLOY_VERSION>/get-started/configuration-syntax/).
For more information, refer to the [Introduction to Grafana Alloy](https://grafana.com/docs/alloy/latest/introduction).
@ -52,13 +52,13 @@ Grafana Alloy processes tracing data as it flows through the pipeline to make th
Alloy supports batching of traces.
Batching helps better compress the data, reduces the number of outgoing connections, and is a recommended best practice.
To configure it, refer to the `otelcol.processor.batch` block in the [components reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.batch/).
#### Attributes manipulation
Grafana Alloy allows for general manipulation of attributes on spans that pass through it.
A common use may be to add an environment or cluster variable.
Several processors can manipulate attributes. Examples include the `otelcol.processor.attributes` block in the [component reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.attributes/) and the `otelcol.processor.transform` block in the [component reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.transform/).
#### Attaching metadata with Prometheus Service Discovery
@ -97,7 +97,7 @@ otelcol.exporter.otlp "default" {
}
```
Refer to the `otelcol.processor.k8sattributes` block in the [components reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.k8sattributes/).
#### Trace discovery through automatic logging
@ -138,4 +138,4 @@ Aside from endpoint and authentication, the exporter also provides mechanisms fo
and implements a queue buffering mechanism for transient failures, such as networking issues.
To see all available options,
refer to the `otelcol.exporter.otlp` block in the [Alloy configuration reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.exporter.otlp/) and the `otelcol.exporter.otlphttp` block in the [Alloy configuration reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.exporter.otlphttp/).


@ -25,7 +25,7 @@ pipeline. This allows for automatically building a mechanism for trace discovery
On top of that, you can also get metrics from traces using a logs source, and
allow quickly jumping from a log message to the trace view in Grafana.
While this approach is useful, it isn't as powerful as TraceQL.
If you are here because you know you want to log the
trace ID, to enable jumping from logs to traces, then read on.
@ -47,7 +47,7 @@ This allows searching by those key-value pairs in Loki.
To configure automatic logging, you need to configure the `otelcol.connector.spanlogs` connector with
appropriate options.
To see all the available configuration options, refer to the `otelcol.connector.spanlogs` [components reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.connector.spanlogs/).
This simple example logs trace roots before exporting them to the Grafana OTLP gateway,
and is a good way to get started using automatic logging:


@ -63,7 +63,7 @@ otelcol.exporter.otlp "default" {
}
```
To see all the available configuration options, refer to the [component reference](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.connector.servicegraph/).
### Grafana


@ -14,9 +14,9 @@ There are a number of ways to lower trace volume, including varying sampling str
Sampling is the process of determining which traces to store (in Tempo or Grafana Cloud Traces) and which to discard. Sampling comes in two different strategy types: head and tail sampling.
Sampling functionality exists in both [Grafana Alloy](https://grafana.com/docs/alloy/) and the OpenTelemetry Collector. Alloy can collect, process, and export telemetry signals, with configuration files written in [Alloy configuration syntax](https://grafana.com/docs/alloy/<ALLOY_VERSION>/get-started/configuration-syntax/).
Refer to [Enable tail sampling](https://grafana.com/docs/tempo/<TEMPO_VERSION>/configuration/grafana-alloy/tail-sampling/enable-tail-sampling/) for instructions.
## Head and tail sampling


@ -12,7 +12,7 @@ Probabilistic sampling strategies are easy to implement,
but also run the risk of discarding relevant data that you'll later want.
Tail sampling works with Grafana Alloy.
Alloy configuration files are written in [Alloy configuration syntax](https://grafana.com/docs/alloy/<ALLOY_VERSION>/get-started/configuration-syntax/).
## Configure tail sampling
@ -25,7 +25,7 @@ To see all the available configuration options for load balancing, refer to the
### Example for Alloy
Alloy uses the [`otelcol.processor.tail_sampling` component](https://grafana.com/docs/alloy/<ALLOY_VERSION>/reference/components/otelcol/otelcol.processor.tail_sampling/) for tail sampling.
```alloy
otelcol.receiver.otlp "default" {


@ -77,7 +77,8 @@ memberlist:
### Receiver TLS
Additional receiver configuration can be added to support TLS communication for traces being sent to Tempo. The receiver configuration is pulled in from the OpenTelemetry Collector, and is [documented upstream here](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/config.md#configtls-tlsserversetting).
Additional TLS configuration of OTel components can be found [here](https://github.com/open-telemetry/opentelemetry-collector/tree/main/config/configtls).
An example `tls` block might look like the following:


@ -27,6 +27,7 @@ refs:
- pattern: /docs/enterprise-traces/
destination: https://grafana.com/docs/enterprise-traces/<ENTERPRISE_TRACES_VERSION>/setup/set-up-get-tenants/
---
<!-- Get started pages are mounted in Grafana Drilldown and in GET. Refer to params.yaml in the website repo. -->
# Get started


@ -13,6 +13,7 @@ killercoda:
backend:
imageid: ubuntu
---
<!-- Page is excluded from mounting in GET docs. Refer to params.yaml in the website repo. -->
<!-- INTERACTIVE page intro.md START -->


@ -7,16 +7,18 @@ aliases:
weight: 300
---
<!-- Page is excluded from mounting in GET docs. Refer to params.yaml in the website repo. -->
# Example setups
The following examples show various deployment and configuration options using trace generators so you can get started experimenting with Tempo without an existing application.
For more information about Tempo setup and configuration, see:
* [Set up Tempo](../../setup/)
* [Tempo configuration](../../configuration/)
If you are interested in instrumentation, refer to [Tempo instrumentation](../instrumentation/).
## Docker Compose
@ -27,7 +29,8 @@ Some of the examples include:
- Trace discovery with Loki
- Basic Grafana Alloy/OpenTelemetry Setup
- Various Backends (S3/GCS/Azure)
- [K6 with Traces](../docker-example)
This is a great place to get started with Tempo and learn about various trace discovery flows.
## Helm
@ -42,10 +45,11 @@ To install Tempo on Kubernetes, use the [Deploy on Kubernetes using Helm](https:
For an example of a complete microservice-based deployment, refer to this [Jsonnet-based example](https://github.com/grafana/tempo/tree/main/example/tk).
There are monolithic mode and microservices examples.
To learn how to set up a Tempo cluster, see [Deploy on Kubernetes with Tanka](../../setup/tanka/).
## Introduction to Metrics, Logs, Traces, and Profiles example
The [Introduction to Metrics, Logs, Traces, and Profiles in Grafana](https://github.com/grafana/intro-to-mlt) provides a self-contained environment for learning about Mimir, Loki, Tempo, Pyroscope, and Grafana.
It includes detailed explanations and annotated configurations for each component.
The README.md file explains how to download and [start the environment](https://github.com/grafana/intro-to-mlt#running-the-demonstration-environment), including instructions for using Grafana Cloud and the Grafana Alloy collector.


@ -11,6 +11,7 @@ keywords:
title: Introduction
weight: 120
---
<!-- Introduction pages are mounted in Grafana Drilldown, Cloud Traces, and in GET. Refer to params.yaml in the website repo. -->
# Introduction


@ -35,7 +35,7 @@ The most important features and enhancements in Tempo 2.4 are highlighted below.
### Multi-tenant queries
Tempo now allows you to query multiple tenants at once. We've made multi-tenant queries compatible with streaming ([first released in v2.2](../v2-2/#get-traceql-results-faster)) so you can get query results as fast as possible.
To learn more, refer to [Cross-tenant federation](https://grafana.com/docs/tempo/<TEMPO_VERSION>/operations/manage-advanced-systems/cross_tenant_query/) and [Enable multi-tenancy](https://grafana.com/docs/tempo/<TEMPO_VERSION>/operations/manage-advanced-systems/multitenancy/). [PRs [3262](https://github.com/grafana/tempo/pull/3262), [3087](https://github.com/grafana/tempo/pull/3087)]
### TraceQL metrics (experimental)


@ -133,7 +133,7 @@ We've changed to an RF1 (Replication Factor 1) pattern for TraceQL metrics as we
TraceQL metrics are still considered experimental.
We hope to mark them GA soon when we productionize a complete RF1 write-read path.
[PRs [3628](https://github.com/grafana/tempo/pull/3628), [3691](https://github.com/grafana/tempo/pull/3691), [3723](https://github.com/grafana/tempo/pull/3723), [3995](https://github.com/grafana/tempo/pull/3995)]
**For recent data**


@ -10,7 +10,7 @@ aliases:
# Enable multi-tenancy
Tempo is a multi-tenant distributed tracing backend. It supports multi-tenancy through the use of a header: `X-Scope-OrgID`.
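For example, a client reading or writing traces for a tenant named `team-engineering` would send the header `X-Scope-OrgID: team-engineering` with each request (the tenant name here is illustrative).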
Refer to [multi-tenancy docs](https://grafana.com/docs/tempo/<TEMPO_VERSION>/operations/manage-advanced-systems/multitenancy/) for more details.
This document outlines how to deploy and use multi-tenant Tempo with the Operator.
## Multi-tenancy without authentication


@ -175,7 +175,7 @@ For a complete list of changes, refer to the [Tempo 2.6 CHANGELOG](https://githu
We've changed to an RF1 (Replication Factor 1) pattern for TraceQL metrics as we were unable to hit performance goals for RF3 deduplication. This requires some operational changes to query TraceQL metrics.
TraceQL metrics are still considered experimental, but we hope to mark them GA soon when we productionize a complete RF1 write-read path. [PRs [3628](https://github.com/grafana/tempo/pull/3628), [3691](https://github.com/grafana/tempo/pull/3691), [3723](https://github.com/grafana/tempo/pull/3723), [3995](https://github.com/grafana/tempo/pull/3995)]
**For recent data**


@ -11,6 +11,8 @@ keywords:
- TraceQL
---
<!-- TraceQL pages are mounted in GET. Refer to params.yaml in the website repo. -->
# TraceQL
Inspired by PromQL and LogQL, TraceQL is a query language designed for selecting traces in Tempo. Currently, a TraceQL query can select traces based on the following:


@ -23,7 +23,7 @@ The default Tempo search reviews the whole trace. TraceQL provides a method for
For a deeper look at TraceQL, read the [TraceQL: A first-of-its-kind query language to accelerate trace analysis in Tempo 2.0](/blog/2022/11/30/traceql-a-first-of-its-kind-query-language-to-accelerate-trace-analysis-in-tempo-2.0/) blog post.
For examples of query syntax, refer to [Construct a TraceQL query](https://grafana.com/docs/tempo/<TEMPO_VERSION>/traceql/#construct-a-traceql-query).
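For instance, a structural query such as `{ resource.service.name = "frontend" } >> { status = error }` selects erroring spans that are descendants of spans from the `frontend` service (the service name is only an illustration).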
{{< vimeo 773194063 >}}
@ -33,8 +33,3 @@ TraceQL will be implemented in phases. The initial iteration of the TraceQL engi
For more information about TraceQL's design, refer to the [TraceQL extensions](https://github.com/grafana/tempo/blob/main/docs/design-proposals/2023-11%20TraceQL%20Extensions.md) and [TraceQL Concepts](https://github.com/grafana/tempo/blob/main/docs/design-proposals/2022-04%20TraceQL%20Concepts.md) design proposals.
### Future work
- Increase OTEL support: Events, Lists, ILS Scope, etc.
- Ancestor and parent structural queries
- Pipeline comparisons