How to Create a Dashboard with OpenSearch

Creating dashboards in OpenSearch has become a crucial skill for anyone working with log aggregation, metrics monitoring, or real-time data visualization. Effective dashboards let you transform raw data into actionable insights, whether you’re tracking server performance, analyzing user behavior, or monitoring application health. This guide walks through the complete process of setting up OpenSearch and OpenSearch Dashboards (its Kibana-style visualization layer), covering everything from installation to advanced visualization techniques that’ll make your data actually useful.

How OpenSearch Dashboards Work

OpenSearch Dashboards function as the visualization layer on top of your OpenSearch cluster, similar to how Kibana works with Elasticsearch. The architecture consists of three main components:

  • OpenSearch Engine – Stores and indexes your data
  • OpenSearch Dashboards – Web interface for querying and visualization
  • Data Ingest Pipeline – Tools like Logstash, Beats, or custom scripts

The dashboard communicates with OpenSearch through REST APIs, executing queries written in OpenSearch Query DSL or the simpler Query String syntax. When you create a visualization, you’re essentially building a saved query that gets executed against your indices and renders the results in charts, tables, or maps.
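As a concrete illustration (a sketch, assuming the web-logs-* sample indices created later in this guide), the same filter can be written either way; the query string form is handy for quick checks from the command line, while dashboards generate the DSL form behind the scenes:

# Query String syntax - quick ad-hoc filtering
curl -X GET "localhost:9200/web-logs-*/_search?q=status_code:500&pretty"

# Equivalent Query DSL - what a saved visualization sends under the hood
curl -X GET "localhost:9200/web-logs-*/_search?pretty" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "term": {"status_code": 500}
    }
  }'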

Performance-wise, OpenSearch handles query aggregations directly in the engine, meaning your dashboard responsiveness depends heavily on your cluster configuration and data structure. Well-designed indices with proper field mappings can return complex aggregations in milliseconds, while poorly structured data might take seconds.
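For example, a typical "average response time over time" panel boils down to a single aggregation request that the engine computes and returns as buckets. A minimal sketch, again assuming the web-logs-* sample indices:

# Date histogram with an average sub-aggregation - computed entirely in the engine
curl -X GET "localhost:9200/web-logs-*/_search?pretty" \
  -H "Content-Type: application/json" \
  -d '{
    "size": 0,
    "aggs": {
      "per_hour": {
        "date_histogram": {"field": "@timestamp", "fixed_interval": "1h"},
        "aggs": {
          "avg_response_time": {"avg": {"field": "response_time"}}
        }
      }
    }
  }'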

Step-by-Step Dashboard Setup

Let’s get OpenSearch and Dashboards running. This setup works great on a VPS with at least 4GB RAM, though production workloads will want more horsepower.
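One prerequisite worth handling first: on most Linux distributions the default vm.max_map_count is lower than OpenSearch expects, and depending on your configuration the node may refuse to start (or at least log warnings) until it's raised:

# Raise the mmap count limit OpenSearch expects (applies until reboot)
sudo sysctl -w vm.max_map_count=262144

# Make it permanent across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf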

Installing OpenSearch

First, grab the latest OpenSearch release and set up the basic configuration:

wget https://artifacts.opensearch.org/releases/bundle/opensearch/2.11.0/opensearch-2.11.0-linux-x64.tar.gz
tar -xzf opensearch-2.11.0-linux-x64.tar.gz
cd opensearch-2.11.0

# Configure basic settings
cat > config/opensearch.yml << EOF
cluster.name: dashboard-cluster
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
plugins.security.disabled: true
EOF

# Start OpenSearch
./bin/opensearch

Wait for the cluster to initialize (you'll see "started" in the logs), then verify it's running:

curl -X GET "localhost:9200/_cluster/health?pretty"

Installing OpenSearch Dashboards

In a separate terminal, set up the dashboard interface:

wget https://artifacts.opensearch.org/releases/bundle/opensearch-dashboards/2.11.0/opensearch-dashboards-2.11.0-linux-x64.tar.gz
tar -xzf opensearch-dashboards-2.11.0-linux-x64.tar.gz
cd opensearch-dashboards-2.11.0

# Configure dashboard connection
cat > config/opensearch_dashboards.yml << EOF
server.host: "0.0.0.0"
server.port: 5601
opensearch.hosts: ["http://localhost:9200"]
opensearch.ssl.verificationMode: none
opensearch.security.enabled: false
EOF

# Start the dashboard service
./bin/opensearch-dashboards

Navigate to http://your-server:5601 and you should see the OpenSearch Dashboards welcome screen.
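If the page doesn't come up, you can check the service from the command line; OpenSearch Dashboards (like the Kibana it was forked from) exposes a status endpoint:

# Quick health check of the Dashboards service
curl -s http://localhost:5601/api/status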

Loading Sample Data

Let's create some realistic data to work with. This script generates web server log entries:

#!/bin/bash
# generate_logs.sh

INDICES=("web-logs-2024.01" "web-logs-2024.02")
METHODS=("GET" "POST" "PUT" "DELETE")
STATUS_CODES=(200 201 404 500 503)
PATHS=("/api/users" "/api/orders" "/login" "/dashboard" "/api/products")

for index in "${INDICES[@]}"; do
  for i in {1..1000}; do
    timestamp=$(date -d "$((RANDOM % 30)) days ago" --iso-8601=seconds)
    method=${METHODS[$RANDOM % ${#METHODS[@]}]}
    status=${STATUS_CODES[$RANDOM % ${#STATUS_CODES[@]}]}
    path=${PATHS[$RANDOM % ${#PATHS[@]}]}
    response_time=$((RANDOM % 2000 + 50))
    ip="192.168.1.$((RANDOM % 255))"
    
    curl -X POST "localhost:9200/$index/_doc" \
      -H "Content-Type: application/json" \
      -d "{
        \"@timestamp\": \"$timestamp\",
        \"method\": \"$method\",
        \"status_code\": $status,
        \"path\": \"$path\",
        \"response_time\": $response_time,
        \"client_ip\": \"$ip\",
        \"bytes_sent\": $((RANDOM % 50000 + 1000))
      }"
  done
done

Run this script to populate your indices with sample data:

chmod +x generate_logs.sh
./generate_logs.sh
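The script takes a few minutes since it indexes one document per request. Once it finishes, a quick sanity check confirms the indices exist and how many documents landed in them:

# List the sample indices with their document counts
curl -X GET "localhost:9200/_cat/indices/web-logs-*?v"

# Or get a total count across both indices
curl -X GET "localhost:9200/web-logs-*/_count?pretty"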

Creating Your First Dashboard

Now for the fun part - building visualizations that actually tell a story with your data.

Setting Up Index Patterns

Before creating visualizations, you need to define index patterns that tell OpenSearch Dashboards how to interpret your data:

  • Go to Stack Management > Index Patterns
  • Click "Create index pattern"
  • Enter pattern: web-logs-*
  • Select @timestamp as the time field
  • Save the pattern

OpenSearch will automatically detect field types as documents arrive, but explicit mappings perform better for aggregations. Keep in mind that the type of a field that already holds data can't be changed in place (that requires a new index or a reindex), so if you want custom types like these, define them before the data goes in, either directly or through an index template like the one shown later in this guide:

PUT web-logs-*/_mapping
{
  "properties": {
    "status_code": {"type": "integer"},
    "response_time": {"type": "integer"},
    "method": {"type": "keyword"},
    "path": {"type": "keyword"},
    "client_ip": {"type": "ip"}
  }
}

Building Core Visualizations

Let's create several visualization types that work well together:

1. Response Time Line Chart

  • Navigate to Visualize > Create visualization > Line
  • Select your web-logs-* index pattern
  • Y-axis: Average of response_time
  • X-axis: Date histogram on @timestamp (auto interval)
  • Save as "Response Time Trends"

2. Status Code Distribution

  • Create new visualization > Pie chart
  • Slice size: Count
  • Split slices: Terms aggregation on status_code
  • Save as "HTTP Status Distribution"

3. Top Endpoints Table

  • Create new visualization > Data table
  • Rows: Terms on path.keyword (top 10)
  • Metrics: Count, Average response_time
  • Save as "Popular Endpoints"
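Each of these saved visualizations is ultimately just an aggregation request. For reference, the "Popular Endpoints" table corresponds roughly to the following query (a sketch; the path.keyword field name assumes the dynamic mapping created by the sample data script):

GET web-logs-*/_search
{
  "size": 0,
  "aggs": {
    "popular_endpoints": {
      "terms": {"field": "path.keyword", "size": 10},
      "aggs": {
        "avg_response_time": {"avg": {"field": "response_time"}}
      }
    }
  }
}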

Assembling the Dashboard

Now combine these visualizations into a cohesive dashboard:

  • Go to Dashboard > Create new dashboard
  • Click "Add" and select your saved visualizations
  • Arrange panels by dragging and resizing
  • Add a time range picker in the top right
  • Save as "Web Traffic Overview"

Pro tip: Use a 2x2 grid layout with your line chart spanning the full width at the top, and smaller charts below for optimal readability.

Real-World Use Cases and Examples

Here are some dashboard configurations I've found work well in production environments:

Application Performance Monitoring

This dashboard layout works great for monitoring web applications:

Panel            | Visualization Type | Key Metrics              | Time Range
Traffic Overview | Area Chart         | Request count over time  | Last 24h
Error Rate       | Line Chart         | 4xx/5xx percentage       | Last 24h
Response Times   | Heatmap            | P50, P95, P99 latency    | Last 4h
Top Errors       | Data Table         | Error messages by count  | Last 1h

Infrastructure Monitoring

For server monitoring, this configuration provides comprehensive coverage:

# Sample Metricbeat configuration for system metrics
metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - memory
    - network
    - diskio
  period: 10s

# Beats don't ship a dedicated OpenSearch output; OSS builds of Metricbeat
# can point the standard Elasticsearch output at an OpenSearch cluster
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "metricbeat-%{+yyyy.MM.dd}"

Dashboard panels for this data:

  • CPU Usage - Multi-line chart showing user, system, and idle percentages
  • Memory Utilization - Stacked area chart of used vs available memory
  • Disk I/O - Dual-axis chart with read/write operations and throughput
  • Network Traffic - Line chart of bytes in/out per interface

Security Event Dashboard

Security monitoring requires different visualization approaches:

# Sample security event structure
{
  "@timestamp": "2024-01-15T10:30:00Z",
  "event_type": "authentication_failure",
  "source_ip": "203.0.113.42",
  "user_agent": "Mozilla/5.0...",
  "geo": {
    "country": "US",
    "city": "New York"
  },
  "severity": "medium"
}

Effective security dashboard elements:

  • Geographic Map - Plot suspicious IPs by location
  • Event Timeline - Time-based view of security events by severity
  • Top Attackers - Data table of source IPs with event counts
  • Attack Patterns - Heatmap showing attack types vs time of day
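One caveat on the geographic map panel: coordinate maps need a field mapped as geo_point, and the sample event above only carries country and city names. A hypothetical mapping sketch, assuming events are enriched with a geo.location coordinate pair and stored in an index named security-events (both the index name and the enrichment step are assumptions here):

# Hypothetical index mapping so the map visualization can plot events
PUT security-events
{
  "mappings": {
    "properties": {
      "@timestamp": {"type": "date"},
      "event_type": {"type": "keyword"},
      "source_ip": {"type": "ip"},
      "severity": {"type": "keyword"},
      "geo": {
        "properties": {
          "country": {"type": "keyword"},
          "city": {"type": "keyword"},
          "location": {"type": "geo_point"}
        }
      }
    }
  }
}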

OpenSearch vs Elasticsearch Dashboard Comparison

Since OpenSearch forked from Elasticsearch, the dashboard capabilities are nearly identical, but there are some key differences worth noting:

Feature              | OpenSearch Dashboards | Kibana (Elastic) | Notes
Basic Visualizations | ✅ Full support       | ✅ Full support  | Nearly identical functionality
Query DSL            | ✅ Compatible         | ✅ Native        | OpenSearch maintains compatibility
Plugin Ecosystem     | ⚠️ Growing            | ✅ Mature        | Elastic has more third-party plugins
Machine Learning     | ✅ Built-in           | 💰 Commercial    | OpenSearch includes ML features for free
Alerting             | ✅ Free               | 💰 Commercial    | Basic alerting free in OpenSearch
Performance          | ⚡ Equivalent         | ⚡ Equivalent    | Similar performance characteristics

For most use cases, the choice comes down to licensing and cost rather than technical capabilities. OpenSearch's Apache 2.0 license makes it attractive for commercial use without licensing fees.

Advanced Dashboard Techniques

Custom Queries and Filters

Beyond basic aggregations, you can use complex queries to create more sophisticated visualizations:

# Custom query for calculating error rate percentage
# (divide the "errors" bucket count by the "total" bucket count for the rate)
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {"range": {"@timestamp": {"gte": "now-1h"}}}
      ]
    }
  },
  "aggs": {
    "error_rate": {
      "filters": {
        "filters": {
          "errors": {"terms": {"status_code": [400, 401, 403, 404, 500, 502, 503]}},
          "total": {"match_all": {}}
        }
      }
    }
  }
}

Dashboard Variables and Controls

Add interactive controls to make dashboards more dynamic:

  • Time Range Picker - Let users adjust the time window
  • Filter Dropdowns - Add controls for common filters like environment or service
  • Input Controls - Create text inputs for custom search terms

Example filter control configuration:

# Input control for filtering by specific endpoint
{
  "control_type": "list",
  "field_name": "path.keyword",
  "parent": "",
  "label": "API Endpoint",
  "type": "terms",
  "options": {
    "dynamicOptions": true,
    "multiselect": true,
    "size": 10
  }
}

Performance Optimization and Best Practices

Index Design for Dashboard Performance

Dashboard responsiveness depends heavily on how you structure your indices:

# Optimized index template for time-series data
PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0,
      "refresh_interval": "30s",
      "index.codec": "best_compression"
    },
    "mappings": {
      "properties": {
        "@timestamp": {"type": "date"},
        "level": {"type": "keyword"},
        "service": {"type": "keyword"},
        "message": {"type": "text", "index": false},
        "response_time": {"type": "long"},
        "user_id": {"type": "keyword"}
      }
    }
  }
}

Key optimization strategies:

  • Use keyword fields for aggregations instead of analyzed text
  • Disable indexing for fields you won't search on
  • Set appropriate refresh intervals - 30s is fine for most dashboards
  • Use date-based indices for time-series data to enable efficient cleanup
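Building on that last point, OpenSearch ships with an Index State Management (ISM) plugin that can handle the cleanup automatically. A minimal sketch of a policy that deletes matching indices after 30 days (the policy name, retention period, and index pattern here are arbitrary choices):

PUT _plugins/_ism/policies/delete-old-logs
{
  "policy": {
    "description": "Delete log indices after 30 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {"state_name": "delete", "conditions": {"min_index_age": "30d"}}
        ]
      },
      {
        "name": "delete",
        "actions": [{"delete": {}}],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["logs-*"],
      "priority": 100
    }
  }
}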

Query Performance Tips

Some queries perform much better than others in dashboard contexts:

Avoid                           | Use Instead                 | Performance Gain
Wildcard queries (*term*)       | Match or term queries       | 10-100x faster
Script-based sorting            | Field-based sorting         | 5-50x faster
Large size parameters           | Scroll API for big datasets | Prevents timeouts
Nested aggregations (>3 levels) | Flattened data structure    | 2-10x faster
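To make the first row concrete, here's what that swap looks like with the sample web-log fields (a sketch; both queries assume the dynamically created path.keyword subfield):

# Avoid: leading wildcards force a scan of every unique term in the field
GET web-logs-*/_search
{
  "query": {"wildcard": {"path.keyword": {"value": "*users*"}}}
}

# Use instead: an exact term query against the keyword field
GET web-logs-*/_search
{
  "query": {"term": {"path.keyword": "/api/users"}}
}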

Common Issues and Troubleshooting

Dashboard Loading Problems

The most common issues I've encountered and their solutions:

Slow Dashboard Loading

  • Check query performance with the Profile API
  • Reduce time range for initial testing
  • Add more specific filters to reduce data volume
  • Consider using data sampling for large datasets

# Profile a slow query to identify bottlenecks
GET logs-*/_search
{
  "profile": true,
  "query": {"match_all": {}},
  "aggs": {
    "slow_agg": {
      "terms": {"field": "some_field.keyword", "size": 1000}
    }
  }
}

Memory Issues

If OpenSearch runs out of memory during dashboard queries:

# Adjust JVM heap size
export OPENSEARCH_JAVA_OPTS="-Xms2g -Xmx2g"

# Configure circuit breakers
PUT _cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "30%",
    "indices.breaker.request.limit": "20%"
  }
}

Visualization Display Issues

When charts don't display expected data:

  • Check field mappings - Ensure numeric fields aren't mapped as text
  • Verify time field - Make sure @timestamp is properly formatted
  • Inspect index patterns - Refresh field lists after mapping changes
  • Review aggregation limits - Default bucket limits might hide data
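For the first two points, it's worth checking what OpenSearch actually has on record rather than guessing; the field mapping API answers that in one call:

# Confirm how specific fields were mapped before debugging the visualization
GET web-logs-*/_mapping/field/status_code,response_time,@timestamp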

Security Considerations

While this guide disabled security for simplicity, production deployments need proper authentication and authorization:

# Enable security plugin
plugins.security.disabled: false
plugins.security.ssl.transport.enabled: true
plugins.security.ssl.http.enabled: true

# Create dashboard user with limited permissions
PUT _plugins/_security/api/roles/dashboard_user
{
  "cluster_permissions": ["cluster_composite_ops"],
  "index_permissions": [{
    "index_patterns": ["logs-*", "metrics-*"],
    "allowed_actions": ["read", "indices:monitor/stats"]
  }]
}
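The role on its own doesn't grant anyone access; it still has to be mapped to a user or backend role. A minimal sketch using the security plugin's role-mapping API (the username below is hypothetical):

# Attach the role to an account (replace with your real user or backend role)
PUT _plugins/_security/api/rolesmapping/dashboard_user
{
  "users": ["dashboard_viewer"]
}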

Additional security best practices:

  • Use HTTPS for dashboard access
  • Implement role-based access control
  • Audit dashboard access and query patterns
  • Regularly update OpenSearch and Dashboard versions

For production deployments requiring dedicated resources, consider using dedicated servers to ensure consistent performance under high query loads. OpenSearch clusters benefit significantly from dedicated CPU and memory resources, especially when handling multiple concurrent dashboard users.

Remember that effective dashboards evolve with your monitoring needs. Start simple, measure what matters to your organization, and gradually add complexity as you identify patterns and pain points in your data. The official OpenSearch documentation provides additional details on advanced configuration options and API references for deeper customization.


