Sending logs from Flutter apps in real-time using ELK stack & MQTT

Umair Adil
Published in ITNEXT
8 min read · Jul 3, 2020


Debugging issues from multiple apps:

Have you ever had to debug issues in your apps using logs stored in different sources? I had to, and just fetching the logs took a long time before I could even start debugging. Sometimes I would wait for days for requested logs to finally arrive because a user’s device was turned off. Most of my time was spent requesting logs via push notification and waiting for them to come in.

The first step towards solving issues in your apps is to reduce the time it takes to identify them.

Log availability has been my team’s biggest issue from the beginning. We tried several solutions in the past, but none of them were consistent & reliable.

There will always be exceptions and problems in the production environment, but retrieving the logs doesn’t have to be a stressful process. If you have the logs, figuring out the issue becomes much easier because you can eventually trace the pattern. If you don’t have the logs at all, figuring out an issue can be a nightmare.

The Right Solution:

The solution is to store exception logs on the device using a logger; audit logs and a crash reporting service are also good to have. But if you have lots of apps, managing the logs itself becomes a hurdle. How can you manage thousands of log entries coming from multiple sources every day? The ELK Stack makes this much easier.

ELK Stack is a collection of open-source projects that when combined can help you with the above problem.

TL;DR: Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets you visualize the data in Elasticsearch with charts and dashboards.

Our Requirements:

We needed logs for specific events from our apps sent to a centralized remote server in real-time, where they could be stored in a file separated by organizations & users. In case of any reported issue, we should be able to just open up the website and search for logs by applying appropriate filters. Once we have traced the logs, we should be able to download the complete log file for auditing. It’s as simple as that.

Here is a simple flow of how it should work:

Setting up the environment:

To begin with this solution, I had to install the ELK stack on an Azure Windows machine. I won’t go into the installation steps as they are straightforward; you can find them here. The tricky part for me was setting up the configurations. Let’s dive into that.

Setting up MQTT:

In order to send logs in real-time, we went with MQTT pub-sub. We already had an MQTT server set up; all we needed to do was configure the broker URL, port, topic & certificate on both the ELK server and the Flutter app.

Logstash supports MQTT input via a plugin, but it doesn’t ship with one, so we had to install it ourselves. The plugin can be downloaded from this link. Once downloaded, it can be installed by performing a few additional steps. The last step is to add it to the Gemfile located in the Logstash directory:
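As a sketch of that step, the gem name and path below are placeholders for whichever MQTT input plugin you downloaded:

```
# Append the plugin to the Gemfile in the Logstash root directory:
gem "logstash-input-mqtt", :path => "/path/to/logstash-input-mqtt"
```

After that, the plugin can be installed with Logstash’s bundled plugin manager, e.g. `bin/logstash-plugin install --no-verify`.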

Our apps will publish logs to a topic that the Logstash plugin subscribes to. To tell Logstash to listen for MQTT input, we need to configure it. Here is the configuration we applied to Logstash:

This configuration is saved in the logstash.conf file. We use an input configuration for MQTT and a filter to extract the message field from the JSON schema (more on the JSON schema below). The output is written to a file. Each log entry contains JSON fields for the organization, app & user; Logstash uses these values to create a new directory or file if one doesn’t already exist. The file name is time-formatted.
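For reference, a minimal logstash.conf along these lines could look as follows. The MQTT input’s option names vary by plugin version, and the output path is a placeholder, so treat this as a sketch rather than a drop-in config:

```
input {
  mqtt {
    host => "your-broker-url"         # broker URL without scheme
    port => 8883
    topic => "com.flutter.elk.logs"
    # TLS/certificate options go here, depending on the plugin
  }
}

filter {
  # Parse the MQTT payload and extract the embedded JSON fields
  json {
    source => "message"
  }
}

output {
  file {
    # One directory per organization & user, time-formatted file name
    path => "/var/log/apps/%{[organization][organization.name]}/%{[user][user.name]}/%{+YYYY-MM-dd-HH}.log"
  }
}
```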

Filebeat Configuration:

Logstash collects & parses logs into files from multiple sources; we also need to ship these logs to Elasticsearch so they show up in the Kibana dashboard. We use Filebeat as an efficient log shipper. In our case, we just need it to send logs from a source path, which is the same path where Logstash is writing its output. Filebeat’s configuration is saved in the filebeat.yml file. Here is how we configured it:
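A minimal filebeat.yml in that spirit might look like the following; the paths and the Elasticsearch host are placeholders for your own setup:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # Same directory tree that Logstash writes its output files to
      - /var/log/apps/**/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]
```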

Running Services:

Once all the configuration is done, we have to start everything from the command line. The following services & plugins must be started:

Elastic Service

./elasticsearch.bat

Kibana

./kibana.bat

Logstash

./logstash.bat -f logstash.conf

Filebeat

./filebeat -c filebeat.yml -e -d "*"

ELK Schema:

To show logs on the Kibana dashboard and apply filters to the logged data, ELK requires each log to be embedded in a pre-defined schema. The details of this schema can be found here. We narrowed the schema down to a few fields; here is a sample:

{
  "user": {
    "user.email": "m.umair.adil@gmail.com",
    "user.full_name": "umair",
    "user.id": "17282738",
    "user.hash": "1",
    "user.name": "Umair"
  },
  "message": "{OkHttpClient} {X-XSS-Protection: 1; mode=block} [30 June 2020 04:15:16 PM]",
  "@version": "1",
  "log.logger": "PLogger",
  "host": {
    "host.type": "Android",
    "host.name": "LG",
    "host.architecture": "LG 23",
    "host.hostname": "8000",
    "host.id": "LG",
    "host.ip": "0.0.0.0",
    "host.mac": ""
  },
  "labels": "{}",
  "app": {
    "app.language": "en-US",
    "app.id": "com.example.develop",
    "app.name": "Flutter Dev",
    "app.version": "0.0.108"
  },
  "process.thread.name": "DATA_LOG",
  "organization": {
    "organization.name": "debug",
    "organization.id": "BlackBox"
  },
  "geo": {
    "geo.location": "{ \"lon\": 0.0, \"lat\": 0.0 }"
  },
  "service.name": "Network",
  "@timestamp": "2020-06-30T16:27:36.894Z",
  "topic": "com.flutter.elk.logs",
  "log.level": "INFO"
}

Some of the fields you see above are inserted automatically by Logstash itself, because we are using the JSON filter plugin. See this link for more details.

Here the “message” field contains the log information; the other fields are used for searching & filtering logs. Each log entry is embedded in this schema before being published to MQTT, and the logs are written to the file as JSON strings, one entry per line. Thankfully, creating this schema is made simple by a Flutter logging plugin, which we will discuss next.

Flutter Plugin for logging:

We have created a Flutter plugin named ‘flutter_logs’ for writing logs to files as well as publishing them to the MQTT topic. The plugin can be found here:

This plugin is made specifically to make it easier to ship logs to ELK stack in real-time. Here are some of the major features of this plugin:

  1. Simple MQTT configuration & auto-publishing to the topic
  2. Logs data within the ELK schema
  3. Logs data to local storage in case a complete log trail is needed

To add this plugin to your Flutter project, simply add this line to the pubspec.yaml file:

dependencies:
  flutter_logs: [LATEST_VERSION]

The next step is to initialize flutter_logs with some configuration:

await FlutterLogs.initLogs(
    logLevelsEnabled: [
      LogLevel.INFO,
      LogLevel.WARNING,
      LogLevel.ERROR,
      LogLevel.SEVERE
    ],
    timeStampFormat: TimeStampFormat.TIME_FORMAT_READABLE,
    directoryStructure: DirectoryStructure.FOR_DATE,
    logTypesEnabled: ["Locations", "APIs"],
    logFileExtension: LogFileExtension.LOG,
    logsWriteDirectoryName: "FlutterLogs",
    logsExportDirectoryName: "FlutterLogs/Exported");

Here we are defining the following things:

  1. logLevelsEnabled: Which log levels should be stored or published. The default is all, but you can choose to send only ERROR or SEVERE to save data.
  2. timeStampFormat: The time format appended to each log entry.
  3. directoryStructure: Three types are provided by default; here we have chosen to place logs in a directory named for the current date on the device, e.g. \Logs\03102020\
  4. logTypesEnabled: By default, the logger always writes to hourly files created with respect to the device’s time, but you can also add your own data files for logging specific events. These must be defined in the logs configuration.
  5. logFileExtension: The extension used for your log files.
  6. logsWriteDirectoryName: The storage directory where you want all of your logs to be stored.
  7. logsExportDirectoryName: The storage directory where you want your logs to be exported as a compressed zip file.

To add MQTT configurations, add the following block:

await FlutterLogs.initMQTT(
    topic: "YOUR_TOPIC",
    brokerUrl: "", // Add URL without scheme
    certificate: "name_of_your_certificate_file",
    port: "8883");

To enable & add ELK schema options, add the following block after initializing logs:

await FlutterLogs.setMetaInfo(
    appId: "flutter_logs_example",
    appName: "Flutter Logs Demo",
    appVersion: "1.0",
    language: "en-US",
    deviceId: "00012",
    environmentId: "7865",
    environmentName: "dev",
    organizationId: "5767",
    userId: "883023-2832-2323",
    userName: "umair13adil",
    userEmail: "m.umair.adil@gmail.com",
    deviceSerial: "YJBKKSNKDNK676",
    deviceBrand: "LG",
    deviceName: "LG-Y08",
    deviceManufacturer: "LG",
    deviceModel: "989892BBN",
    deviceSdkInt: "26");

Note: All of these fields are optional.
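With initialization done, writing a log entry is a single call. A minimal sketch using the logInfo method from the plugin’s README; the tag and message values here are just examples:

```dart
import 'package:flutter_logs/flutter_logs.dart';

// Writes an INFO entry to the current hourly log file; with MQTT
// configured, the entry is also wrapped in the ELK schema and
// published to the topic.
FlutterLogs.logInfo("Network", "OkHttpClient", "Request completed");
```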

Run the Flutter App:

Once we run our app, our logs will start appearing in the Kibana dashboard:

We can easily find the organization ID & user ID in these logs, and once we have them, we can download the complete log file from the server’s storage.

Conclusion:

The ELK stack is a powerful tool for log centralization: even with multiple apps, we can forward all the logs to a single point, and it is capable of handling large amounts of log data. The time spent tracing an issue & fetching logs is greatly reduced. We look forward to using the machine learning features provided by the ELK stack, and there are many ways we can improve this for future applications by trying out the different features & plugins that Kibana has to offer.

Here is the sample code for the flutter_logs demo:
