
Why you should use Presto for ad hoc analytics

Ad hoc analytics is the process of analyzing data on demand, without predefined queries or schemas. It allows users to explore data freely and discover new insights that may not be captured by regular reports or dashboards. Ad hoc analytics is especially useful for complex and large-scale data sets that require fast and flexible querying.
However, not all data platforms are suitable for ad hoc analytics. Some of them may have limitations in terms of scalability, performance, compatibility, or cost. That’s why you should consider using Presto for your ad hoc analytics needs. Presto is an open source distributed SQL query engine that can query data from various sources, such as relational databases, NoSQL databases, cloud storage, or even streaming data. Presto is designed to handle petabytes of data and thousands of concurrent users with low latency and high throughput.
Here are some of the benefits of using Presto for ad hoc analytics:
- Presto supports a wide range of data sources. You can query data from multiple sources using a single SQL interface, without the need to move or transform the data. This makes it easy to join and analyze data from different systems, such as MySQL, MongoDB, S3, Kafka, etc.
- Presto is fast and scalable. Presto uses a distributed architecture that splits queries into smaller tasks and executes them in parallel across a cluster of nodes. Presto also leverages in-memory processing and columnar storage to optimize performance. Presto can handle complex queries over large data sets in seconds or minutes, compared to hours or days for other systems.
- Presto is compatible with various tools and frameworks. You can use Presto with your favorite BI tools, such as Tableau, Power BI, Looker, etc., to create interactive dashboards and reports. You can also use Presto with popular data science frameworks, such as Spark, TensorFlow, PyTorch, etc., to perform advanced analytics and machine learning.
- Presto is cost-effective and easy to deploy. You can run Presto on any cloud platform or on-premises infrastructure, using commodity hardware or virtual machines. You can also use managed services, such as Amazon Athena or Amazon EMR, that offer Presto-based querying as a service. You only pay for the resources you use, and you can scale up or down as needed.
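The cross-source querying described above boils down to qualifying each table as `catalog.schema.table` in ordinary SQL. As a sketch (the catalog names follow the examples above, but the schema and table names here are hypothetical), a single query can join customer records in MySQL with event data in Hive:

```sql
-- Join a MySQL table with a Hive table in one Presto query;
-- each table is addressed as catalog.schema.table
SELECT c.name,
       count(*) AS clicks
FROM mysql.crm.customers AS c
JOIN hive.web.click_events AS e
  ON c.id = e.customer_id
GROUP BY c.name
ORDER BY clicks DESC
LIMIT 10;
```

No data movement or ETL is required: Presto pushes work to each connector and joins the results in its own engine.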
In conclusion, Presto is a powerful and versatile query engine that can enable you to perform ad hoc analytics over any data source with high performance and low cost. If you want to learn more about Presto and how to use it for your ad hoc analytics projects, check out the official Presto documentation.
Now that you know why you should use Presto for ad hoc analytics, you may wonder how to get started with it. The good news is that Presto is very easy to install and configure. You can follow the steps below to set up a Presto cluster on your own:
- Download and extract the Presto server package. You can download the latest release from the official Presto website. You need to download and extract the package on each node of your cluster.
- Create a configuration directory. Create a directory called `etc` under the Presto installation directory. This directory will contain the configuration files for your cluster.
- Create a node.properties file. This file specifies the basic properties of each node, such as the node ID, environment name, and data directory. You can use the following template and modify it according to your needs:
```properties
node.environment=production
node.id=unique-node-id
node.data-dir=/var/presto/data
```
- Create a jvm.config file. This file specifies the Java Virtual Machine (JVM) options for each node, such as the memory size, garbage collection settings, and classpath. You can use the following template and modify it according to your needs:
```
-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-cp /usr/lib/presto/lib/*
```
- Create a config.properties file. This file specifies the coordinator and worker properties for each node, such as the HTTP port, discovery service URI, and query memory limits. Every node needs this file; the template below is for the coordinator, while workers set `coordinator=false` and omit the discovery server settings. You can use the following template and modify it according to your needs:
```properties
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://example.net:8080
```
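For reference, a worker node's config.properties might look like the following minimal sketch, where `discovery.uri` points at the coordinator's host and port from the template above:

```properties
coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=http://example.net:8080
```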
- Create a catalog directory and catalog properties files. This directory contains the properties files for each data source that you want to query with Presto. Each file specifies the connector name, connection URL, credentials, etc. for a data source. You need to create this directory and these files on all nodes of your cluster. You can use the following templates and modify them according to your needs:
```properties
# mysql.properties
connector.name=mysql
connection-url=jdbc:mysql://example.net:3306
connection-user=root
connection-password=secret

# mongodb.properties
connector.name=mongodb
mongodb.seeds=example.net:27017
mongodb.credentials=user:password@db

# hive.properties
connector.name=hive-hadoop2
hive.metastore.uri=thrift://example.net:9083
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml

# kafka.properties
connector.name=kafka
kafka.table-names=topic1,topic2,topic3
kafka.nodes=example.net:9092

# etc.
```
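Once the catalog files are in place and the servers have been restarted, you can confirm that Presto has registered each data source with a few exploratory statements. For example (the catalog names follow the templates above; `example_schema` is a hypothetical schema name):

```sql
-- List all registered catalogs
SHOW CATALOGS;

-- List schemas available through the MySQL catalog
SHOW SCHEMAS FROM mysql;

-- List tables in one schema
SHOW TABLES FROM mysql.example_schema;
```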
After creating these configuration files, you are ready to start your Presto cluster. You can use the following commands to start or stop the Presto server on each node:
```shell
# Start Presto server
/usr/lib/presto/bin/launcher start

# Stop Presto server
/usr/lib/presto/bin/launcher stop

# Check Presto server status
/usr/lib/presto/bin/launcher status
```
Once your cluster is up and running, you can use the Presto CLI or any other client tool to connect to it and run queries. You can find more details about how to use Presto in the documentation.
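As a minimal sketch of a CLI session (assuming the Presto CLI executable has been downloaded and placed on your PATH as `presto`, and that the coordinator listens on port 8080 as configured above; `example_schema` and `orders` are hypothetical names):

```
# Connect the CLI to the coordinator, defaulting to one catalog and schema
presto --server example.net:8080 --catalog mysql --schema example_schema

# Inside the interactive prompt, run ad hoc queries
presto> SHOW CATALOGS;
presto> SELECT count(*) FROM orders;
```

Because the catalog and schema are only defaults, any query can still reach other sources with fully qualified `catalog.schema.table` names.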
Presto is not only a great tool for ad hoc analytics, but also a powerful platform for federated queries across all of your data sources.