- Updated on 17 Aug 2022
- 3 Minutes to read
Once messages are being received, you may want to explore a bit. The following describes each page’s purpose and function. Please note that individual permissions will affect which pages are visible to users.
Streams operate as a form of tagging for incoming messages. Streams route messages into categories in real time, and stream rules instruct Graylog which messages to route into the appropriate stream.
Streams are used to route data for storage into an index. They are also used to control access to data, and route messages for parsing, enrichment, and other modification. Streams then determine which messages to archive.
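The routing described above can be modeled as a toy sketch: each stream has rules, and a message lands in every stream whose rules all match. This is purely illustrative (the field names and rule shape are hypothetical); Graylog's actual matching runs inside the server.

```python
# Illustrative model of stream-rule routing; not Graylog's implementation.

def matches(rule, message):
    """A hypothetical equality rule: require a field to equal a value."""
    return message.get(rule["field"]) == rule["value"]

def route(message, streams):
    """Return the names of every stream whose rules all match the message."""
    return [
        name for name, rules in streams.items()
        if all(matches(r, message) for r in rules)
    ]

streams = {
    "firewall": [{"field": "source", "value": "fw01"}],
    "errors":   [{"field": "level", "value": "error"}],
}

msg = {"source": "fw01", "level": "error", "message": "blocked packet"}
print(route(msg, streams))  # a message can land in more than one stream
```

Note that routing is additive: a single message can be tagged into several streams at once, which is what makes streams useful for both access control and index routing.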
The Graylog Search page is the interface used to search logs directly. Graylog uses a simplified syntax very similar to Lucene. Relative or absolute time ranges are configurable from drop-down menus. Searches may be saved, or visualized as dashboard widgets and added directly to dashboards from within the search screen.
Users may configure their own views and may choose to see either summary or complete data from event messages.
For additional detail, please see Searching.
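To make the Lucene-like syntax concrete, here is a minimal sketch of evaluating a `field:value` query against a single log message. The real search runs in Elasticsearch and supports a far richer grammar (ranges, wildcards, boolean operators); this toy parser handles only space-separated `field:value` terms.

```python
# Toy evaluator for a Lucene-style "field:value" query; illustrative only.

def parse_query(query):
    """Split 'source:fw01 level:error' into (field, value) pairs."""
    terms = []
    for token in query.split():
        field, _, value = token.partition(":")
        terms.append((field, value))
    return terms

def message_matches(query, message):
    """A message matches when every term matches (implicit AND)."""
    return all(str(message.get(f)) == v for f, v in parse_query(query))

log = {"source": "fw01", "level": "error"}
print(message_matches("source:fw01 level:error", log))  # True
print(message_matches("source:fw02", log))              # False
```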
Graylog Dashboards are visualizations or summaries of information contained in log events. Each dashboard is populated by one or more widgets. Each widget visualizes or summarizes event log data using values derived from message fields, such as counts, averages, or totals. Users can create indicators, charts, graphs, and maps to visualize the data.
Dashboard widgets and dashboard layouts are configurable. Graylog's role-based access control governs who can view or edit each dashboard. Users can import and export dashboards via content packs.
For additional detail, please see Dashboards.
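The aggregations a widget displays (counts, totals, averages over a field) can be sketched in a few lines. The event shape and field names below are hypothetical, chosen only to show the three aggregation types mentioned above.

```python
# Sketch of widget-style aggregations over log events; illustrative only.
from collections import Counter

events = [
    {"source": "web01", "took_ms": 120},
    {"source": "web01", "took_ms": 80},
    {"source": "web02", "took_ms": 200},
]

count_by_source = Counter(e["source"] for e in events)  # data for a bar chart
total_ms = sum(e["took_ms"] for e in events)            # a "total" widget
avg_ms = total_ms / len(events)                         # an "average" widget

print(count_by_source["web01"], total_ms, round(avg_ms, 1))  # 2 400 133.3
```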
Alerts consist of two related elements: alert conditions and alert notifications.
For additional detail, please see Alerts.
The Overview page displays information related to the administration of the Graylog instance: system notifications, system job status, ingestion rates, Elasticsearch cluster health, indexer failures, time configuration, and system event messages.
The Configuration page allows users to set options or variables related to searches, message processors, and plugins.
The Nodes page contains summary status information for each Graylog node. Detailed health information and metrics are available from the buttons displayed on this page.
Use outputs to define methods of forwarding data to remote systems, including port, protocol, and any other required information. Out of the box, Graylog supports STDOUT and GELF outputs, but users may write their own; more are available in the Graylog Marketplace.
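As a concrete example of the GELF output format mentioned above, the sketch below builds a GELF 1.1 payload. The field names (`version`, `host`, `short_message`, `timestamp`, `level`, underscore-prefixed custom fields) follow the GELF payload spec, but the helper itself is hypothetical; check your Graylog version's documentation before relying on it.

```python
# Hedged sketch of a GELF (Graylog Extended Log Format) 1.1 payload.
import json
import time
import zlib

def gelf_payload(host, short_message, level=6, **extra):
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": level,  # syslog severity: 6 = informational
    }
    # custom fields must be prefixed with an underscore per the spec
    msg.update({f"_{k}": v for k, v in extra.items()})
    return zlib.compress(json.dumps(msg).encode())  # GELF UDP allows zlib

packet = gelf_payload("app01", "user logged in", user_id=42)
# a real sender would now do: socket.sendto(packet, (graylog_host, 12201))
decoded = json.loads(zlib.decompress(packet))
print(decoded["version"], decoded["_user_id"])  # 1.1 42
```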
Use the Authentication page to configure Graylog’s authentication providers and manage the active users of this Graylog cluster. Graylog supports LDAP or Active Directory for both authentication and authorization.
Content packs accelerate the set-up process for a specific data source. A content pack can include inputs/extractors, streams, dashboards, alerts, and pipeline processors.
Users can export any program element created within Graylog as content packs for use on other systems. Authors may keep content packs private for quick internal deployment of new nodes, or share them with the community via the Graylog Marketplace. For example, users can create custom inputs, streams, dashboards, and alerts to support a security use case. Users can then export the content pack and import it on a newly installed Graylog instance to save configuration time and effort.
Users may download content packs created and shared by other users via the Graylog Marketplace. User-created content packs are not supported by Graylog, but instead by their authors.
List of Elements Supported in Content Packs
- Grok Patterns
- Lookup Tables
- Lookup Caches
- Lookup Data Adapters
An Index is the basic unit of storage for data in Elasticsearch. Index sets provide configuration for retention, sharding, and replication of the stored data.
Values, like retention and rotation strategy, are set on a per-index-set basis, so different data may be subjected to different handling rules.
For more details, please see Index model.
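The rotation and retention decisions described above can be sketched as two small policies: rotate the active index once it is full, and keep only the newest N indices. Graylog implements these as configurable strategies (by time, size, or message count), so treat this as an illustration rather than the actual algorithm.

```python
# Toy model of per-index-set rotation and retention; illustrative only.

def should_rotate(index, max_docs):
    """Count-based rotation: start a new index once the active one is full."""
    return index["doc_count"] >= max_docs

def apply_retention(indices, max_indices):
    """Deletion retention: keep only the newest max_indices indices."""
    return indices[-max_indices:]

indices = [{"name": f"graylog_{i}", "doc_count": 1_000_000} for i in range(5)]
print(should_rotate(indices[-1], max_docs=1_000_000))    # True
print([i["name"] for i in apply_retention(indices, 3)])  # newest three kept
```

Because these values are set per index set, short-lived debug logs and long-retention audit logs can coexist under different rotation and retention policies.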
Graylog created the Sidecar agent to manage fleets of log shippers, like Beats or NXLog. These log shippers are used to collect OS logs from Linux and Windows servers. Log shippers are often the simplest way to read logs written locally to a flat file, and send them to a centralized log management solution. Graylog supports management of any log shipper as a backend.
For more details, please see Graylog Sidecar.
Graylog’s Processing Pipelines enable the user to run a rule, or a series of rules, against a specific type of event. Tied to streams, pipelines allow routing, blacklisting, modification, and enrichment of messages as they flow through Graylog. If you want to parse, change, convert, add to, delete from, or drop a message, pipelines are the place to do it.
For more details, please see Processing Pipelines.
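A pipeline can be pictured as a chain of rules applied to a message in order, where any rule may modify the message or drop it entirely. Real Graylog rules are written in its own pipeline rule language and attached to streams; the Python model below, with its invented rule functions, is purely illustrative.

```python
# Illustrative model of a processing pipeline as a chain of rule functions.

def drop_debug(msg):
    """Blacklist-style rule: drop messages below a severity threshold."""
    return None if msg.get("level") == "debug" else msg

def enrich_env(msg):
    """Enrichment rule: add a field derived from an existing one."""
    msg["_env"] = "prod" if msg["source"].startswith("prod-") else "dev"
    return msg

def run_pipeline(msg, rules):
    for rule in rules:
        msg = rule(msg)
        if msg is None:  # a dropped message stops the pipeline
            return None
    return msg

out = run_pipeline({"source": "prod-web01", "level": "info"},
                   [drop_debug, enrich_env])
print(out["_env"])  # prod
```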