Introduction
I believe Calcite uses the javacc-maven-plugin, which generates the parser. FMPP uses the FreeMarker syntax. The explanation is as follows:
To use FMPP with Apache Calcite, you typically define the templates, metadata, and configuration files in a designated directory or package within the project. The FMPP tool is then invoked, specifying the input files, output directory, and any required configuration. FMPP processes the templates, substitutes the placeholders with the provided metadata, and generates the desired output files.
Optimizers
There are two optimizers provided within Calcite:
HepPlanner - A heuristic, rule-based optimizer. It matches a set of rewrite rules against the query and applies those that improve it, similar in function to the rule-based optimizer (RBO) provided within Oracle. A short usage sketch follows this list.
VolcanoPlanner - A cost-based optimizer. It iterates through the registered rules and applies them to the query in different combinations until it finds the plan with the lowest cost. Not every plan permutation can be explored; the optimizer stops after a certain number of iterations or when the cost ceases to improve.
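A minimal sketch of the HepPlanner flow, assuming an already-built RelNode (for example one produced by the RelBuilder examples below); CoreRules.FILTER_INTO_JOIN is only an illustrative choice of rule:
import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rules.CoreRules;
public class HeuristicOptimizationSketch {
// Wraps a single rewrite rule in a HepProgram and applies it to the given plan
static RelNode optimize(RelNode relNode) {
HepProgram program = new HepProgramBuilder()
.addRuleInstance(CoreRules.FILTER_INTO_JOIN) // push filters into joins where possible
.build();
HepPlanner planner = new HepPlanner(program);
planner.setRoot(relNode);
return planner.findBestExp();
}
}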
Adapter
Maven
We include the following dependencies.
<dependency>
<groupId>org.apache.calcite</groupId>
<artifactId>calcite-core</artifactId>
<version>1.26.0</version>
</dependency>
<dependency>
<groupId>org.apache.calcite.avatica</groupId>
<artifactId>avatica-core</artifactId>
<version>1.17.0</version>
</dependency>
Avatica is a framework for building database drivers.
RelBuilder Class - Relational Algebra Builder
create Method
Example
// Build our connection
Connection connection = DriverManager.getConnection("jdbc:calcite:");
// Unwrap our connection using the CalciteConnection
CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
// Get a pointer to our root schema for our Calcite Connection
SchemaPlus rootSchema = calciteConnection.getRootSchema();
// Attach our Postgres Jdbc Datasource to our Root Schema
rootSchema.add("exampleSchema", JdbcSchema.create(rootSchema, "exampleSchema",
dataSource, null, null));
FrameworkConfig config = Frameworks.newConfigBuilder()
.defaultSchema(rootSchema)
.build();
RelBuilder r = RelBuilder.create(config);
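To check that the builder is wired up correctly, you can build a node and print its plan. A minimal sketch, assuming a table (named "some_table" here purely for illustration) exists in the attached exampleSchema; RelOptUtil comes from org.apache.calcite.plan.
// "some_table" is a hypothetical name; use a table that actually exists in "exampleSchema"
RelNode node = r.scan("exampleSchema", "some_table").build();
// Print the relational expression tree that the builder produced
System.out.println(RelOptUtil.toString(node));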
equals Method
Takes two operands (RexNode), here field references, and returns an equality predicate; in this example it is used as the join condition.
Example
//SELECT * FROM "cats"
// LEFT JOIN "dogs" ON "cats"."petID" = "dogs"."petID";
RelBuilder relBuilder = ...;
relBuilder.scan("petDB", "cats")
.scan("petDB", "dogs").as("dogTableAlias")
.join(JoinRelType.LEFT,
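// field(2, 0, "petID"): with two inputs on the stack, reference field "petID" of input 0 ("cats");
// field(2, "dogTableAlias", "petID") references "petID" of the aliased "dogs" input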
relBuilder.equals(
relBuilder.field(2, 0, "petID"),
relBuilder.field(2, "dogTableAlias", "petID")
)
);
Example
We do it as follows. Here two different databases (Postgres and MySQL) are attached to the same Calcite root schema so that both can be queried through a single connection.
import org.apache.calcite.adapter.jdbc.JdbcSchema;
import org.apache.calcite.interpreter.Bindables;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.rel.RelHomogeneousShuttle;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.RelShuttle;
import org.apache.calcite.rel.core.TableScan;
import org.apache.calcite.rel.logical.LogicalTableScan;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;
import org.apache.calcite.tools.RelRunner;
import org.verdictdb.commons.DBTablePrinter;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
public class CalciteJdbcExample {
private static final String POSTGRESQL_SCHEMA = "PUBLIC";
private static final String MYSQL_SCHEMA = "mysql";
public static void main(String[] args) throws Exception {
// Build our connection
Connection connection = DriverManager.getConnection("jdbc:calcite:");
// Unwrap our connection using the CalciteConnection
CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
// Get a pointer to our root schema for our Calcite Connection
SchemaPlus rootSchema = calciteConnection.getRootSchema();
// Instantiate a data source, this can be autowired in using Spring as well
DataSource postgresDataSource = JdbcSchema.dataSource(
"jdbc:postgresql://localhost/db",
"org.postgresql.Driver", // Change this if you want to use something like MySQL, Oracle, etc.
"postgres", // username
"example" // password
);
// Instantiate a data source, this can be autowired in using Spring as well
DataSource mysqlDataSource = JdbcSchema.dataSource(
"jdbc:mysql://localhost/db",
"com.mysql.jdbc.Driver", // Change this if you want to use something like MySQL, Oracle, etc.
"Username", // username
"Password" // password
);
// Attach our Postgres Jdbc Datasource to our Root Schema
rootSchema.add(POSTGRESQL_SCHEMA, JdbcSchema.create(rootSchema, POSTGRESQL_SCHEMA, postgresDataSource, null, null));
// Attach our MySQL Jdbc Datasource to our Root Schema
rootSchema.add(MYSQL_SCHEMA, JdbcSchema.create(rootSchema, MYSQL_SCHEMA, mysqlDataSource, null, null));
// Build a framework config to attach to our Calcite Planners and Optimizers
FrameworkConfig config = Frameworks.newConfigBuilder()
.defaultSchema(rootSchema)
.build();
RelBuilder rb = RelBuilder.create(config);
RelNode node = rb
// First parameter is the Schema, the second is the table name
.scan("PUBLIC", "TABLE_NAME_IN_POSTGRES")
.scan("mysql", "TABLE_NAME_IN_MYSQL")
// Note: the two scans are not joined here; the filter and projection below apply to the most recently scanned table
.filter(
rb.equals(rb.field("fieldname"), rb.literal("literal"))
)
// These are the fields you want to return from your query
.project(
rb.field("id"),
rb.field("col1"),
rb.field("colb")
)
.build();
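// The program below contains no rules, so findBestExp() will return the plan unchanged;
// rules can be added with HepProgram.builder().addRuleInstance(...) to enable real rewrites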
HepProgram program = HepProgram.builder().build();
HepPlanner planner = new HepPlanner(program);
planner.setRoot(node);
RelNode optimizedNode = planner.findBestExp();
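// Rewrite every LogicalTableScan into a BindableTableScan so that the interpreter
// behind RelRunner can execute the plan directly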
final RelShuttle shuttle = new RelHomogeneousShuttle() {
@Override public RelNode visit(TableScan scan) {
final RelOptTable table = scan.getTable();
if (scan instanceof LogicalTableScan && Bindables.BindableTableScan.canHandle(table)) {
return Bindables.BindableTableScan.create(scan.getCluster(), table);
}
return super.visit(scan);
}
};
optimizedNode = optimizedNode.accept(shuttle);
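// RelRunner prepares and executes a RelNode directly on the Calcite connection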
final RelRunner runner = connection.unwrap(RelRunner.class);
PreparedStatement ps = runner.prepare(optimizedNode);
ps.execute();
ResultSet resultSet = ps.getResultSet();
DBTablePrinter.printResultSet(resultSet);
}
}