If you have feedback, suggestions, or questions about this post, please reach out to me on Twitter (@data_henrik), Mastodon (@data_henrik@mastodon.social), or LinkedIn.
Henrik's thoughts on life in IT, data and information management, cloud computing, cognitive computing, covering IBM Db2, IBM Cloud, Watson, Amazon Web Services, Microsoft Azure and more.
Showing posts with label applications.
Friday, August 25, 2023
Simple installation: Python driver for IBM Db2 (finally) has wheel support
A quick post about a feature I have been asking for for several years: the Db2 driver for Python, ibm_db, now has wheel support. This means that the Python package ships a pre-built binary with the Db2 driver for many Python versions and most users. Until now, installing the Db2 driver required a build environment, and creating small container images required a multi-stage approach. Now I can simply use a slim base image and install the Db2 driver.
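With wheel support, the multi-stage build collapses into a single slim stage. A minimal sketch, assuming a hypothetical app.py; the base image tag is a placeholder:

```dockerfile
# Sketch only: app.py and the Python tag are placeholders.
FROM python:3.11-slim
# No compiler or separate build stage needed anymore:
# pip resolves ibm_db to a pre-built wheel on supported platforms.
RUN pip install --no-cache-dir ibm_db
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```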
Thursday, June 2, 2022
Create a REST API with OpenAPI spec for your database objects
Image: Swagger UI for my events API
Monday, July 26, 2021
How to connect from Python to Db2
One of the recurring questions I have seen is "how to connect from Python to Db2". I have blogged about Python and Db2 a couple of times before. Recently, the question has been popping up more frequently again. From my view, the drivers behind that increase are the increased security with mandatory SSL/TLS encryption and the use of Python-based Jupyter notebooks for data science and AI / ML projects. Moreover, there is Db2 on premises as well as Db2 on Cloud / Db2 Warehouse on Cloud plus container-based offerings. Today, I am trying to sort this out and answer that question. Again.... :)
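The pieces usually come together in a connection string passed to ibm_db. Here is a small sketch that builds one; the keyword names follow the documented Db2 CLI keywords, but the host, port, and credentials below are placeholders:

```python
def build_db2_dsn(database, hostname, port, uid, pwd, ssl=True):
    """Build a Db2 CLI connection string suitable for ibm_db.connect()."""
    parts = [
        f"DATABASE={database}",
        f"HOSTNAME={hostname}",
        f"PORT={port}",
        "PROTOCOL=TCPIP",
        f"UID={uid}",
        f"PWD={pwd}",
    ]
    if ssl:
        parts.append("SECURITY=SSL")  # TLS is mandatory on Db2 on Cloud
    return ";".join(parts) + ";"

# Placeholder values for illustration only
dsn = build_db2_dsn("BLUDB", "example.db2.cloud.ibm.com", 31198, "user", "secret")
# With the driver installed you would then call (not run here):
#   import ibm_db
#   conn = ibm_db.connect(dsn, "", "")
print(dsn)
```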
Monday, August 12, 2019
Get some REST with Db2
Image: Db2 has a REST API
Labels:
applications,
bluemix,
cloud,
Conference,
DB2,
developer,
IBM,
ibmcloud,
idug,
IT,
performance
Wednesday, June 12, 2019
After IDUG is before IDUG: Back from Db2 conference in Charlotte, NC
Last week the IDUG NA 2019 conference was held in Charlotte, NC. It featured 8 parallel tracks dedicated to Db2 for z/OS, Db2 for LUW, and emerging technologies. In addition, two tracks for hands-on labs and workshops were offered. Personally, I delivered three presentations, moderated and attended sessions, and was active on Twitter. So what was on? Continue after the snapshot of Twitter photos from last week.
Image: Twitter photo feed from IDUG NA 2019
Thursday, October 4, 2018
Db2 Node.js and Python drivers and ORM
I am back from the Db2 Aktuell conference with my talk about Db2 app development and IBM Cloud. One of the many questions I received since then is about where to find the Db2 drivers for Node.js and Python. What type of ORM (Object-Relational Mapping) libraries are supported? Let me briefly answer that here and provide some links.
Image: Db2 drivers turning bits into information
Db2 and Node.js or Python
Db2 supports many programming languages for database application development; a good overview is in the Db2 Knowledge Center. Node.js and Python use the open source drivers for IBM Database Servers that are available on GitHub.
- Last year, I blogged about the Python drivers and that IBM actually provides four different ones. The reason is that there are different Python APIs and frameworks, including SQLAlchemy and Django, each of which features its own ORM layer.
- For Node.js, a popular framework is Sails with its Waterline ORM; Db2 is a supported database. A popular Node.js API framework is Loopback by the IBM company StrongLoop. The Db2 connector can be found in this GitHub repository. Sequelize is a promise-based ORM; the Db2 driver team has started adding support for it.
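For the SQLAlchemy route, the ibm_db_sa adapter uses a URL-style connection string. A small sketch of how it is assembled; the "db2+ibm_db" scheme is what the adapter documents, while host, port, and credentials here are placeholders:

```python
def db2_sqlalchemy_url(user, password, host, port, database):
    """Assemble an SQLAlchemy URL for the ibm_db_sa dialect."""
    return f"db2+ibm_db://{user}:{password}@{host}:{port}/{database}"

url = db2_sqlalchemy_url("user", "secret", "example.com", 50000, "BLUDB")
# With SQLAlchemy and ibm_db_sa installed you would then (not run here):
#   from sqlalchemy import create_engine
#   engine = create_engine(url)
print(url)
```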
Wednesday, June 13, 2018
Securing your Python app with OpenID Connect (OIDC)
Image: Authorized access only using OpenID Connect
Labels:
administration,
analytics,
applications,
best practices,
cloud,
cloudfoundry,
DB2,
IBM,
IT,
Python,
security
Thursday, October 26, 2017
Cloud Foundry Logging Sources Deciphered
Ever deployed a Cloud Foundry app and run into errors? I did and still do. My typical reaction is to request the recent app logs and to analyse them for the root cause. The logs contain those strange (if you don't know them) codes, e.g., RTR, STG, APP, PROC, WEB and more. Here is how to decipher them and use them to your advantage.
The codes above denote the Cloud Foundry component emitting the log entry. You can find a list of those component identifiers in the Cloud Foundry documentation or in the IBM Cloud (Bluemix) documentation on Log Sources. There are many component names. I found that remembering the following helps a lot:
Image: Happy Cloud Debugging
- RTR: This is the router which channels HTTP requests to an app. Here you find information about the request itself including client information.
- STG: The staging phase has details about the deployment and restaging of an app. They help with initial errors.
- APP: If your app writes information to stderr or stdout, then you will find it here. For some apps this could be the well-known "printf debugging".
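When scanning long "cf logs" output, it helps to pull the component code out programmatically. A sketch assuming the common "[COMPONENT/instance]" layout of CF log lines (the sample line below is made up):

```python
import re

# Matches the Cloud Foundry log source, e.g. "[RTR/0]" or "[APP/PROC/WEB/0]".
LOG_SOURCE = re.compile(r"\[([A-Z_]+)(?:/[^\]]+)?\]")

def log_source(line):
    """Return the component code of a CF log line, or None if absent."""
    m = LOG_SOURCE.search(line)
    return m.group(1) if m else None

line = '2017-10-26T10:12:05.00+0000 [RTR/0] OUT myapp.mybluemix.net - "GET / HTTP/1.1" 200'
print(log_source(line))  # -> RTR
```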
Friday, September 29, 2017
App Security Scanner: My Cloud Database App is Secure
Image: Cloud Application Security
Wednesday, August 9, 2017
Introduction to Cloud App Security - Part 1
Image: Security for Cloud Apps
Note that this is part 1 of 2. Part 2 is linked at the bottom.
Labels:
applications,
best practices,
bluemix,
cloud,
cloudfoundry,
encryption,
IBM,
ibmcloud,
IT,
security
Introduction to Cloud App Security - Part 2
(This is a continuation from part 1 of this introduction to Cloud App Security.)
Providing a secure app or application is a fundamental requirement. This is especially true in a cloud environment. In my post about “Securing Workloads on the IBM Cloud” I discussed the various layers that realize secure cloud computing. Today, I want to focus solely on apps that are built for deployment in the IBM Cloud. What makes up a secure app? What cloud services help establish app security? Let’s take a look together.
App Security Services
To focus on the application logic, the functionality and business side, developers can delegate or “outsource” some of the security tasks. Here is a non-exhaustive list of services that the IBM Cloud with the Bluemix platform provides. I am going to use the list of security topics from above:
Labels:
applications,
best practices,
bluemix,
cloud,
encryption,
IBM,
ibmcloud,
IT,
security
Monday, February 20, 2017
Write Your Own CLI Plugins for Bluemix Cloud Foundry
Image: README for my Plugin
Tuesday, January 17, 2017
Bluemix: Simplified Deployment of Complex Cloud Foundry Apps
Image: Two apps from single manifest
Cloud Foundry allows multiple apps to be described with a single manifest file. That is, properties for several apps (or services) can be put together. For each app, its name and the location of its code need to be specified. They are shown in blue in my sample manifest file. Each app can be deployed to a specific machine, identified by the host and domain name. For the example I chose a different approach: the relatively new "routes" property. It allows combining those properties and even adding path information. The routing is highlighted in yellow below. All I needed to do was execute a simple "cf push" command, and the entire application with its multiple pieces got deployed.
Here is the sample manifest.yml file:
# This manifest deploys two applications.
#
# Both use the same host and domain name as defined
# by their respective route(s) property. The first app
# uses the root path, the second the "sub" and
# "lower" paths.
applications:
# The Python app starts here
- name: yourname-myapp
memory: 256M
command: python myapp.py
routes:
- route: yourname-myapp.mybluemix.net
path: ./top/
#
# The Node.js app starts here
#
- name: yourname-myapp-node
routes:
- route: yourname-myapp.mybluemix.net/lower
- route: yourname-myapp.mybluemix.net/sub
path: ./lower/
If you wonder what the entire project looks like, visit https://github.com/data-henrik/Bluemix-ContextPathRouting for the source code and a more detailed description. I put this repository together to showcase Context Path Routing on IBM Bluemix, which I will discuss in an upcoming blog post.
Labels:
administration,
applications,
bluemix,
cloud,
cloudfoundry,
ibmcloud,
IT,
nodejs,
Python
Friday, October 28, 2016
Bluemix: How to Register Your Own Service Broker
Image: Dashboard from Sample Service
Friday, February 27, 2015
DB2 Battle: Optimization Profile vs. Statement Concentrator (Part 1)
Image: DB2 explain output
The original question I tried to answer was related to REOPT and a query like this:
select id, s
from betw
where id between ? and ?
Labels:
applications,
best practices,
catalog,
DB2,
diagnostics,
fun,
indexing,
IT,
performance,
Python,
query,
sql,
version 10.5
Wednesday, December 3, 2014
Introduction and resources for migrating from Oracle to DB2
Well, a boring headline for an interesting topic. Originally I had planned to title today's blog entry "Forget about Black Friday and Cyber Monday - Save big by moving to DB2". But then writing this entry has dragged on for some days...
Image: Database Conversion Workbench
In the past weeks I have been asked several times about how to migrate off Oracle and on to DB2. Let me give you a quick overview of the technical side; for the financial part you have to ask an IBM Business Partner or an IBM seller. Although you can move to DB2 from database systems like Microsoft SQL Server, MySQL, and others, I will focus on Oracle because of the compatibility features built into DB2.
When moving off Oracle, this could be for an SAP system (or another vendor application) or for other applications ("non-SAP"). For SAP environments and applications from several other software vendors there is a (kind of) standardized process to migrate a system. The reason is that there are database-specific definitions and feature exploitations. A great example is how SAP makes use of the DB2-only BLU Acceleration to boost performance for analytic environments. Many software vendors provide tools for database migration and related processes or services.
For custom scenarios where the application code is available, a manual migration applies. The traditional barrier to a migration, the (more or less) incompatibility of products, has been torn down by adding compatibility features to DB2. Some of those features come ready to use by any user; some require specific preparation of DB2 because they may impact the traditional handling / "look and feel". The DB2 Knowledge Center has a good introduction and overview of that topic: "DB2 Compatibility Features". If you are coming to DB2 with a background in Oracle, then use the Terminology Mapping to discover how products, features, and commands are named in the world of DB2.
From release to release there have been several enhancements to the SQL compatibility with database vendors such as Oracle. An overview by release can be found in the related "What's New" section of each of the recent DB2 releases:
- SQL compatibility enhancements in DB2 9.7
- The added compatibility features in DB2 10.1
- Even more SQL compatibility enhancements in DB2 10.5
- New features and enhancements related to application development in DB2 9.7
- Application development improvements in DB2 10.1
- The related section for DB2 10.5 (only JSON support listed)
If you prefer a book instead of the DB2 Knowledge Center, then I recommend the IBM Redbook "Oracle to DB2 Conversion Guide: Compatibility Made Easy". It gives an overview of DB2, the tools needed for a migration in a non-SAP environment, and the conversion approach. In the appendix you will also find a nice terminology mapping, i.e., an explanation of how Oracle commands and features are named in the world of DB2.
A key tool for custom migrations is the Database Conversion Workbench (DCW). It is a plugin for IBM Data Studio, a free tool for database administration, design, and SQL development. The DCW allows you to analyze a database schema with respect to DB2 compatibility. The result is a valuable foundation for estimating the overall conversion effort. Once that is done (or if it is not needed), the Database Conversion Workbench helps in the process of moving the database schema, database-side SQL packages, and thereafter the data from another database system to DB2. DCW also includes a package visualizer to help understand package dependencies, which simplifies the code conversion. See this page for an overview and more resources around the Database Conversion Workbench.
An important DB2 feature related to the compatibility is the ability to run PL/SQL code. Several administrative PL/SQL packages ship with DB2 which can be found in the "built-in modules" section. Moreover, there are also some PL/SQL packages that can be used "as is" and are available from IBM developerWorks in the database conversion community: See here for the add-on PL/SQL packages.
That's all for now with my overview of resources for the Oracle to DB2 migration. I hope that it provides a good introduction into that (sometimes confusing) topic.
BTW: I have been covering many migration-related topics in my blog. Serge Rielau and now Rick Swagerman have provided many SQL tips for DB2 in their blog.
Wednesday, December 4, 2013
DB2: Nothing to hide...
Today, while teaching a DB2 class, I was asked whether it is possible to hide all columns of a table. The rationale behind the question was that all application developers would then be forced to specify the columns during INSERT. Well, my first reaction was that at least one column needs to be visible. What does it look like?
Testing is faster than looking into the manual for IMPLICITLY HIDDEN columns, so here is a quick test:
db2 => create table hideme(id int implicitly hidden, s varchar(60) implicitly hidden)
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL20422N The statement failed because the table "HLOESER.HIDEME" would only
contain hidden columns. SQLSTATE=428GU
db2 => create table hideme(id int, s varchar(60) implicitly hidden)
DB20000I The SQL command completed successfully.
db2 => insert into hideme values(1)
DB20000I The SQL command completed successfully.
db2 => insert into hideme(id,s) values(2,'Hey')
DB20000I The SQL command completed successfully.
db2 => select * from hideme
ID
-----------
1
2
2 record(s) selected.
db2 => select id,s from hideme
ID S
----------- ------------------------------------------------------------
1 -
2 Hey
2 record(s) selected.
As assumed, at least one column must remain visible.
Monday, October 14, 2013
connect_proc and locales: Connecting the dots for combined fun
Last year, in a blog article, I wrote about some fun I had with different locales and the dayname function in DB2. Last week I showed you links to customize the application environment. Why not combine the two for some added fun? Let me show you how to connect the dots. I am going to show you how to adapt the locale setting in DB2 based on user preferences.
First, we need a simple table to store our user preferences:
create table myschema.users (id int, uname varchar(30), locale char(5));
For our testing purposes, two user entries will do:
insert into myschema.users values(1,'HLOESER','de_DE');
insert into myschema.users values(2,'DB2FENC1','fr_FR');
What we will do is set up our own procedure as connect_proc in DB2. In that procedure we access our small user table and read out the preference for the locale setting based on the value of SESSION_USER. Then we set the CURRENT LOCALE LC_TIME special register, which controls the language and behavior for day names, timestamp formats, rounding, truncation, and some more.
create or replace procedure hl.my_connect()
reads sql data
language sql
BEGIN
declare loc char(5);
select locale into loc from myschema.users
where uname=SESSION_USER;
set current locale lc_time loc;
END@
After the procedure is created, some testing should be done:
db2 => call my_connect()
Return Status = 0
What also needs to be done is to allow everyone to execute our procedure:
db2 => grant execute on procedure hl.my_connect to public
DB20000I The SQL command completed successfully.
The final step is to update the database configuration and tell DB2 to use our procedure on every successful connection.
db2 => update db cfg using connect_proc hloeser.my_connect
DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully.
To see the effect of our procedure in place we need to connect as user "hloeser" and as "db2fenc1".
Testing as "hloeser":
select dayname(current date) from dual
1
----------------------------------------------------------------------------------------------------
Monday
When we connect as "db2fenc1", the result is different:
values(dayname(current date))
1
----------------------------------------------------------------------------------------------------
lundi
It seems replacing the connect_proc with our procedure worked. I close with one important piece of advice: leave one database connection open during testing. If your connect procedure is not working well, you need that connection to reset the database configuration (update db cfg using connect_proc '')...
Update: There was a question whether the connection is needed or why I recommend it: The answer is that connect_proc, once set, can only be updated when connected to the database. Unsetting connect_proc can be done when not connected, but the database needs to be inactive. Thus, having the connection saves time when developing.
Labels:
administration,
applications,
DB2,
fun,
Information Center,
IT,
Oracle,
PL/SQL,
sql,
version 10.5
Tuesday, October 8, 2013
I'm not you - Ways to customize a DB2 application environment
I am not you, she is not him. Users differ, applications differ. There are two interesting concepts in DB2 that help to adapt an application environment and that are not widely known. One is the connect_proc database configuration parameter, the other is conditional compilation of SQL. Let me give you some ideas of what they are and links to dig deeper.
Introduced in DB2 9.7 FP3, the connect procedure lets you configure a stored procedure that is invoked whenever someone (an application) connects to DB2. Thus, the session context can be tailored to the user or application by setting the locale, the path to resolve functions, optimization levels, and more. Even though the procedure is not allowed to modify the database, it could invoke another routine with an autonomous transaction. That way some information could be logged ("track", "audit") or another event be triggered. The DB2 Information Center has some examples on how to use connect_proc for session customization. Serge Rielau showed how to implement some logging.
Conditional compilation allows you to tailor the compiled SQL to the actual environment. You could use procedure or function code specific to little-endian or big-endian machines (making it execute faster), implement your own routine in different ways depending on the DB2 version (using a more efficient implementation with newer DB2 versions), or plug in a lighter version of code, maybe without debug statements. The key is SQL_CCFLAGS, flags for conditional SQL compilation. The variable can be set at the database or session level and then referenced within the actual SQL code. Special "_IF"/"_ELSEIF"/"_THEN"/"_END" directives do the trick of selecting the code you want. This is similar to preprocessor constructs in shell scripting and programming languages.
In the DB2 system catalog you can find out which flags were set when a routine or trigger was compiled ("whom to blame..."). Of course, the currently set value can be retrieved using the CURRENT SQL_CCFLAGS special register.
Of course it is possible to combine the two. Develop your own library of connect-related actions, audit, and setup routines, then set SQL_CCFLAGS to implement just the mix you need for a specific application environment.
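To make the directives more concrete, here is a sketch of how a debug flag might be wired in. The flag name "debug" is invented for illustration, and the exact conditional compilation syntax should be verified in the DB2 Information Center for your release:

```sql
-- Sketch only: "debug" is a made-up flag; check the Information Center
-- for the exact directive syntax before using this.
SET CURRENT SQL_CCFLAGS = 'debug:true';

CREATE OR REPLACE PROCEDURE log_step(IN msg VARCHAR(100))
BEGIN
  _IF debug = true _THEN
    CALL dbms_output.put_line(msg);  -- compiled in only when debug:true
  _END
END@
```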
Labels:
administration,
applications,
DB2,
fun,
Information Center,
IT,
Oracle,
PL/SQL,
sql
Tuesday, September 24, 2013
Everything is my IP: Obfuscating a "Good Morning"
Want to impress or scare your co-workers? Or show your boss how to hide Intellectual Property (IP)? Try this with DB2:
db2 => create view hello WRAPPED SQL10050 ablGWmdiWmtmTmdKTmJqTmdKUmteUmduUnZq1mZK2idaWmdaWmdaWndjHBcaGicaGQIwPJVuycUYnzvpYqeM0J9mNKgGspriVQtaGRWxKEiYWqvXK6WNaJSUbhrqa
DB20000I The SQL command completed successfully.
db2 => select * from hello
GREET
------------
Good Morning
1 record(s) selected.
Even though the functionality for it was introduced in DB2 9.7 FP2, probably not many have looked at obfuscation of SQL code, i.e., the body of routines, views, triggers, and packages or modules. Basically, if some business or application logic is inside the database, obfuscation might be an option. This could be either because you want to protect your IP or for security reasons. At the time code obfuscation was released for DB2 in 2010, I wrote that it would be possible to abandon spaghetti code as a way of protection.
It is pretty simple to use obfuscation as everything needed is in the built-in module DBMS_DDL. It has a function WRAP and a procedure CREATE_WRAPPED. When WRAP is called with a valid statement as its parameter, it returns a string with the DDL statement to create a wrapped, i.e., obfuscated, version of it. The procedure CREATE_WRAPPED is called with a valid DDL statement, transforms it to the wrapped version, and directly executes it. WRAP could be called to produce encoded statements to be embedded in text files. The files would be given to customers. CREATE_WRAPPED could be called directly from an application which creates objects inside a database. Customers would have a hard(er) time reverse engineering the application logic of functions, procedures, or entire packages created in the database.
So instead of creating the view "hello" as shown above, my application could have issued the following statement:
call dbms_ddl.create_wrapped('create view hello (greet) as select ''Good Morning'' from dual')
And now you know the logic that was hidden inside the strange string of my first SQL statement above ("ablGWmdi..."). Try to memorize that pattern and impress your co-workers or boss...
BTW: If you are a hardcore DB2 user and have never used the DUAL table, read here.