Getting Started with Loopback.io on IBMi

Loopback.io is an open-source framework for creating dynamic REST APIs on backend servers, and now we can run it on IBMi using Node version 4.

What can Loopback.io do for IBMiers?
A first approach to Loopback.io can be a little overwhelming because of the vast number of features and the complexity, but the most important things this framework can do for us are:

  • Map our database to models with an ORM-style API (no more SQL statements in our code).
  • It is based on Express.js and generates routes automatically to access our data (it does not access the database directly; all the work is done by middleware).
  • Built-in role-based access control: we can define which users can use our "read" routes and which users can "insert" or "update" (see the sample ACL after this list).
  • DataSource Juggler: the part of LoopBack that allows us to connect to IBMi and many other databases.
  • REST API auto-documentation with Swagger.
  • IBM maintains the framework!
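As a quick taste of the access-control piece, ACL entries look roughly like this in a model's JSON file (a minimal sketch: $everyone and $authenticated are built-in LoopBack roles, while "admin" is a placeholder for a custom role):

 {
   "acls": [
     { "principalType": "ROLE", "principalId": "$everyone",      "permission": "DENY" },
     { "principalType": "ROLE", "principalId": "$authenticated", "permission": "ALLOW", "accessType": "READ" },
     { "principalType": "ROLE", "principalId": "admin",          "permission": "ALLOW", "accessType": "WRITE" }
   ]
 }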
There is also an enterprise version called "StrongLoop" with more features.

Getting Started with IBMi

All my tests have been done on V7R2, with the latest PTFs and Node version 4.

There is nothing different about setting up LoopBack on this platform compared to others. I will install it globally, so we can use the strongloop commands to create all our projects.


 npm install -g strongloop

There are some packages that will not get installed in PASE:
heapdump
modern-syslog
sqlite3
strong-agent
utf-8-validate
bufferutil
strong-debugger

Until the community works out how to compile those packages, you can also try an install that ignores the optional components:


npm install -g strongloop --no-optional


The next step is to create a project using the LoopBack CLI. You will be able to create an empty application, a hello world example, or the skeleton of your app.

Let's just start with a simple "Hello World":


slc loopback RESTAPI



Go to the project directory and start the app simply with:

> node .

Discovering your REST API.

Now you can navigate your new REST API. LoopBack supports Swagger, which is basically a standard interface to REST APIs that allows humans and computers to understand the API.

You can explore your API at this link:

http://servername:port/explorer



API Explorer immediately discovers all the new models and methods created. For example, let's create a new model and see how LoopBack creates all the methods.


 slc loopback:model



In this case I created a simple model, using memory as the datasource. This is just for testing, because the model is not persisted to disk, only kept in memory. If we add data to this model, the data will be lost after restarting the server.
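For instance, assuming the default port 3000, a model named "note" on the in-memory datasource, and LoopBack's default pluralization of the route, the generated endpoints can be tried directly (host and data are hypothetical):

# Create a record through the generated REST route
curl -X POST -H "Content-Type: application/json" \
     -d '{"title":"hello from IBMi"}' \
     http://servername:3000/api/notes

# List all records (they disappear when the server restarts)
curl http://servername:3000/api/notes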


Querying Data from IBMi

For 1-tier applications, we need to use an internal connector that uses the "db2.js" object from IBMi. There are other choices if you want to run LoopBack 2-tier, but you will need the DB2 Connect driver.

Let's install the connector:

npm install loopback-connector-ibmi --save

The connector will be installed in the node_modules folder. Now it is time for some configuration. The first step is to configure a datasource:

projectname/server/datasources.json

{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "ibmi-db": {
    "name": "ibmi-db",
    "connector": "ibmi",
    "username": "",
    "password": "",
    "database": "",
    "hostname": "",
    "schema": "my lib",
    "port": 50000
  }
}



The next step is to create the model and specify which datasource the model will use.
We can use our favourite kind of file definition; I will start simple and use an old DDS-defined file instead of SQL:


 A                                
 A          R CUSTOMER            
 A            NAME          50A   
 A            EMAIL         50A   
 A            SALDO         10S 2 
 A            CUSTOMERID    10S 0 
 A          K CUSTOMERID          

And now, in LoopBack, we configure the model in the server/model-config.json file. I will include the complete file; note how we can add as many datasources and models as we want:


 {
  "_meta": {
    "sources": [
      "loopback/common/models",
      "loopback/server/models",
      "../common/models",
      "./models"
    ],
    "mixins": [
      "loopback/common/mixins",
      "loopback/server/mixins",
      "../common/mixins",
      "./mixins"
    ]
  },
  "note": {
    "dataSource": "db"
  },
  "CUSTOMER": {
    "dataSource": "ibmi-db",
    "public": true
  },
  "People": {
    "dataSource": "db",
    "public": true
  }
}


In this file, you can see that the models "note" and "People" use the datasource "db", which has been declared as an in-memory datasource. The "CUSTOMER" model uses the IBMi datasource.

The last step is to create the model in common/models. We could also do it with the CLI command "slc loopback:model", but I will do it manually.

The model can describe the fields, data types, relations, REST methods, ACLs, etc. For simplicity, I will define only fields and data types.

In the common/models folder, we will need two files:

customer.js


 module.exports = function(CUSTOMER) {

};

and customer.json


 {
  "name": "CUSTOMER",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "CUSTOMERID": {
      "type": "Number",
      "id": true
    },
    "NAME": {
      "type": "string"
    },
    "EMAIL": {
      "type": "string",
      "required": true
    },
    "SALDO": {
      "type": "number",

      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}
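The empty customer.js is where custom behaviour would go later. As a hedged sketch (the method name and route below are hypothetical examples, not something the connector provides), a custom remote method could look like this:

 module.exports = function(CUSTOMER) {
   // Hypothetical example: expose GET /CUSTOMERs/count-by-name
   CUSTOMER.countByName = function(name, cb) {
     // Count records whose NAME column matches the given value
     CUSTOMER.count({ NAME: name }, cb);
   };

   CUSTOMER.remoteMethod('countByName', {
     accepts: { arg: 'name', type: 'string' },
     returns: { arg: 'count', type: 'number' },
     http: { verb: 'get', path: '/count-by-name' }
   });
 };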

The connector "loopback-connector-ibmi" will take care to map this definitions to the database, create a connection pool and send SQL statements. i setup "debug" mode in the loopback-connector-ibmi , so we can see some info in the console.

So, we are done! Now it is just time to restart our LoopBack app and play with the API Explorer.
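Besides the explorer, the generated routes can be called directly. For example (hypothetical host, port and data, and assuming the connector supports the query; LoopBack pluralizes the model name, here assumed to become /api/CUSTOMERs):

# All customers
curl http://servername:3000/api/CUSTOMERs

# A single customer by key
curl http://servername:3000/api/CUSTOMERs/1001

# Filtered query, translated by the connector into SQL against the IBMi
curl "http://servername:3000/api/CUSTOMERs?filter[where][NAME]=John"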



TO-DO
The "loopback-connector-ibmi" is uncomplete and need to be tested for all CRUD operations and test connection pools. By Example the "delete" function still not working. It is not ready for production. Actually is under development and im trying to find time to complete it, but everyone is invited to help :)

The connector could also be used with the DataSource Juggler directly, without the rest of LoopBack.


Here is the source code:

REST-API for LoopBack and IBMi

LoopBack Connector for IBMi


Configure SSH Logging on IBMi

There are a lot of parameters to configure our SSH server, and we should do some hardening to secure it: only use SSH-2, limit user access, configure an idle logout time, etc.

IBMi comes with a default configuration file that we should review before starting SSH on a production server. There is some good documentation about SSH best practices, but that is not the purpose of this post.

The purpose of this post is to show how to configure SSH logging, so we can monitor access to our server. The SSH daemon config is in:


/QOpenSys/QIBM/UserData/SC1/OpenSSH/etc/sshd_config

 By default, logging is disabled:
 # Logging
 # obsoletes QuietMode and FascistLogging
 # SyslogFacility AUTH
 # LogLevel INFO


If we uncomment the "SyslogFacility" and "LogLevel" lines, nothing will happen.

The reason is that the SSH daemon uses a syslog facility to forward the SSH events to a syslog daemon. But we are lucky: we have syslog in PASE. (It reminded me of my old blog entry about how to use syslog and rsync: Remote syslog on IBMi.)

So our next step is to configure syslog. Silly me, I spent many hours trying to get syslog up and running... until I figured out that the syslog config file has to be in /QOpenSys/etc/syslog.conf instead of /etc/syslog.conf like on *nix systems. Yak!


# Create the config file and the log files first
touch /QOpenSys/etc/syslog.conf
mkdir /var/log
touch /var/log/messages
touch /var/log/auth

# Contents of /QOpenSys/etc/syslog.conf
# General: all info events
*.info                /var/log/messages
# Auth events
auth.info             /var/log/auth

The first selector specifies that syslog info messages will go to the file /var/log/messages (it is necessary to create the files first), and all auth.info messages will go to /var/log/auth. We could be more specific and send only SSH messages to an "ssh.log" file, as in the sketch below.
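A minimal sketch of that idea, assuming we switch sshd to the otherwise unused local0 facility (so only SSH traffic lands in the dedicated file) and that the PASE syslogd accepts standard facility selectors:

# In sshd_config: route sshd messages to the local0 facility
SyslogFacility LOCAL0
LogLevel INFO

# In /QOpenSys/etc/syslog.conf: send that facility to its own file
local0.info           /var/log/ssh.log

# Create the log file first
touch /var/log/ssh.log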

So next steps are:

1. Stop SSH. Be sure all your sessions are disconnected. By default, stopping SSH does not disconnect existing SSH sessions: even if we stop SSH (ENDTCPSVR *SSHD), the current sessions stay alive.

There are two ways to deal with this: NETSTAT -> option 3 -> End Job, or check the sshd.pid file and "kill -9 pid".

But we can also change some settings in sshd_config to disconnect idle SSH clients:


ClientAliveInterval 60
ClientAliveCountMax 3

2. In PASE, start "syslogd"

For debugging, you can start "syslogd -d" to check your configuration. But take care: I ended up with 10 syslogd jobs running on the system, and they took almost 100% of the CPU.

It should be fine to run syslogd as a batch process:

SBMJOB CMD(STRQSH CMD('/QOpenSys/usr/sbin/syslogd'))

3. Start SSH

There is a tool to check our syslog config:

logger "Testting...."

cat /var/log/messages
Oct  3 15:32:02 DISIxxxx2 user:notice acl: Testing....


Now, open an SSH session and check the files:


cat /var/log/auth

Oct  3 15:08:01 DISIxxxx2 auth|security:info sshd[19750]: Accepted password for acl from 1xxxxxxx port 54269 ssh2
Oct  3 15:08:01 DISIxxxx2 auth|security:info sshd[19751]: Accepted password for acl from 1xxxxxxx port 54270 ssh2
Oct  3 15:09:37 DISIxxxx2 auth|security:info sshd[19759]: Accepted password for acl from 1xxxxxxx port 54275 ssh2
Oct  3 15:09:37 DISIxxxx2 auth|security:info sshd[19760]: Accepted password for acl from 1xxxxxxx port 54276 ssh2

And the info messages:

cat /var/log/messages


Oct  3 15:09:27 DISIxxxx2 syslog:info syslogd: restart
Oct  3 15:09:37 DISIxxxx2 auth|security:info sshd[19759]: Accepted password for acl from 1xxxxxxx port 54275 ssh2
Oct  3 15:09:37 DISIxxxx2 auth|security:info sshd[19760]: Accepted password for acl from 1xxxxxxx port 54276 ssh2



IBMi DevOps: Deploying objects to multiple IBMi servers with Fabric

Now it is time to create some tools with Fabric.

There are some third-party tools to move data between systems, but we can use the Fabric "get" and "put" functions instead. They use secure copy over SSH in the background, so our copies are safe.

To start a new script, I will use the PASE shell and the IBMi command line, so the definitions start again like this:


from fabric.api import *
IBM_PASE = "/QOpenSys/usr/bin/bsh -c"
IBM_OS = "system"
env.user = "USER"
env.password = "PASS"

The next step is to define the source and target servers. I am interested in moving objects from my development server to the rest of the nodes, but it is also easy to define the source as "local" (our computer) and deploy a SAVF to all the servers.

env.roledefs = {                                                              
    'source': ['dev'],                                           
    'target': ['test1','test2','int1','int2','prod1','prod2'],                                 
}

The next step is to configure which library and temporary SAVF we are going to use to deploy our packages. I define this in a function that I will call only once on all my servers.

@roles('source','target')
def initsavfile():
    env.shell = IBM_OS
    # Create library
    with settings(warn_only=True):
        result = run("CRTLIB FABRIC")
    
    with settings(warn_only=True):
        run("CRTSAVF FILE(FABRIC/SAVF)")

The decorator "@roles", define in what servers i will run the script. Because probably i created the library before, i use "with settings(warn_only=True):" to monitor errors. Fabric "get" function always works from "remote" to "local", and local is the server when Fabric scripts are running.

The next code defines the steps to get a SAVF onto my local computer/server. I will pass the library as a parameter, and it will use the same SAVF for the whole operation:

def get_file(library):
    env.shell = IBM_OS
    # Remove the local file from a previous run
    with settings(warn_only=True):
        local("rm /mnt/c/pythonmanagement/SAVF.FILE")
    # Create the SAVF on the source server.
    # I always got a CPA4067 error from the SSH session even with the reply list entry
    run("CRTSAVF FILE(FABRIC/FABRIC)")
    command_savlib = "SAVLIB LIB(" + library + ") DEV(*SAVF) SAVF(FABRIC/FABRIC)"
    run(command_savlib)
    # Copy the SAVF to my local computer/server running Fabric
    get('/QSYS.LIB/FABRIC.LIB/FABRIC.FILE','/mnt/c/pythonmanagement/SAVF.FILE')
    # Remove the temporary SAVF on the source server
    run("DLTOBJ OBJ(FABRIC/FABRIC) OBJTYPE(*FILE)")

I have not yet defined which servers this code will run on, but "get_file" should run only on the source server. The Fabric "put" operation works from the "local" to the "remote" server.

def put_file(library):
    env.shell = IBM_OS
    with settings(warn_only=True):
        result = put('/mnt/c/pythonmanagement/SAVF.FILE','/QSYS.LIB/FABRIC.LIB/SAVF.FILE')
    if result.failed:
        print("Deployment of library " + library + " failed")
    else:
        command = "RSTLIB SAVLIB(" + library +") DEV(*SAVF)  SAVF(FABRIC/SAVF)"
        run(command) 
        print("Deployment of library " + library + " succeeded")

The final step is to create our main function to deploy a library. The function accepts a library name as a parameter and executes "get_file" and "put_file" sequentially.

@task
def deploy_savf(library):
    env.shell = IBM_OS
    #Get Files from source
    get_file.roles = ('source',)   
    execute(get_file, library)
    # Put files on target
    put_file.roles = ('target',)
    execute(put_file,library)

The @task decorator indicates that this function is the only "public" task of this script.
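With Fabric 1.x, task arguments go after a colon on the command line, so a deployment would be launched with something like this (the library name is just an example):

> fab deploy_savf:MYLIB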


IBMi DevOps: Config management with Python and Fabric, part I

You have probably heard about DevOps many times and wondered how to start with it on IBMi.

What is DevOps?


DevOps represents a change in IT culture; it is not only about software. DevOps focuses on rapid delivery, focuses on people, and seeks to improve collaboration between developer and admin teams. If, in our IBMi world, we want to be part of a DevOps team, we should be able to implement some automation tools and the so-called "infrastructure as code".

I played a bit in the Windows and Linux world with Chef, and my colleagues just kept telling me "no, you can't do DevOps on AS400". (I know, I did my best educating them to say IBMi instead... usually when they pronounce that AS400 word, I stop listening.)

How can I start using some DevOps tools on IBMi?


There are many tools that can help multidisciplinary teams (Windows, Linux, DB teams) work in the world of DevOps: Chef, Ansible, Salt, Puppet... If you already know one of them, try to adapt it to the IBMi world.

It takes a lot of time and effort to understand these tools, but you will be able to manage a huge number of nodes in your infrastructure. Some of these tools use agents or SSH sessions, and they are based on configuration instead of programming.

Recently I deployed 12 IBMi partitions... they were in an "initial state": LIC and software installed, with the latest PTFs. Then it hit me how many repetitive tasks I would need to perform to configure those systems!

Because all systems are part of the same dev/integration/test/production environment, imagine adding system values, configuring subsystems, setting security values (system values, journals, firewall) and deploying software and packages (PHP, Node, tools for developers, Apache configuration, etc.).

No, I don't want to repeat myself doing all of that by hand.

Python Fabric: decentralized DevOps.


After checking the alternatives, I decided to go with a simpler implementation of DevOps and not use a complicated "centralized" system.

Everything I need is just my computer, a code repository and SSH access to my nodes. Simplicity does not mean bad or non-DevOps.

There are a couple of decentralized, lightweight DevOps tools that can be a lot of fun to use: Fabric and BundleWrap. Fabric is more about programming your tasks in Python; BundleWrap is more about configuration, and it is also Python based.

I chose Fabric.

Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. It uses SSH via the paramiko library. It is a very productive tool, and you can start automating many tasks on your IBMi in minutes.



 With Fabric, it is simple to write some code on your computer and run it against all your nodes.



Let's start by installing Fabric (I tested on Python 2.7).

  > pip install fabric

The next step is to create your directory and start typing tasks! For a "Hello World", I recommend using a single node and a simple command.

Note: In these examples I am using a simple password, but of course it is better to manage your connections with SSH keys and known_hosts, as in the sketch below.
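As a minimal sketch of that (the key path is an assumption; env.key_filename and env.reject_unknown_hosts are standard Fabric 1.x settings):

from fabric.api import env

# Authenticate with an SSH key instead of a password prompt
env.user = "ACL"
env.key_filename = "~/.ssh/id_rsa"
# Only connect to hosts already present in known_hosts
env.reject_unknown_hosts = True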

Create the file "fabfile.py" in your directory and write this code:

from fabric.api import run

def test():
    run('uname -s')

Now we can use the command fab --list and see the tasks included:

fab --list

Available commands

      test


and run it with
 > fab -H (your node) test

and the output should look like this:

[myserver] Executing task 'test'
[myserver] run: uname -s
[myserver] Login password for 'andres':
[myserver] out: bsh: /bin/bash: not found
[myserver] out:


Fatal error: run() received nonzero return code 1 while executing!

Requested: uname -s
Executed: /bin/bash -l -c "uname -s"

Aborting.
Disconnecting from myserver... done.

Oops! It failed! But why? Well, Fabric is looking for a default Linux shell in "/bin/bash", so we need to tell Fabric that this is an IBMi and we want to use another shell. We are lucky: Fabric provides environment settings to change our shell.

from fabric.api import run,env

env.shell = "/QOpenSys/usr/bin/bsh -c"
# Change user name, my SSH user
env.user = "ACL"

def test():
    run('uname -s')

> fab -H (your node) test

C:\pythonmanagement>fab -H myserver test
[myserver] Executing task 'test'
[myserver] run: uname -s
[myserver] Login password for 'ACL':
[myserver] Login password for 'ACL':
[myserver] out: OS400
[myserver] out:


Done.
Disconnecting from myserver... done.

So that's all!

With the command line we told our program which node to connect to and which task to perform. Fabric gives us output, so we know the status of our task at any moment.

Now a whole list of tasks that we could perform with Fabric comes to mind:

1. Define host names in our scripts, or even roles (development servers, production servers, etc.).

2. Define any combination of tasks to perform at once.

3. Check the status of services on our servers and respond to them.

4. Deploy a shell script to all our servers and call it from Fabric. But what if we want to run IBMi OS commands? Simple: change the environment shell to the "system" command.

from fabric.api import run,env

IBM_PASE = "/QOpenSys/usr/bin/bsh -c"
IBM_OS = "system"
env.user = "ACL"

def set_hosts():
    # Define my hosts.
    env.hosts = ['disibic21', 'disibic22']

def check_lib():
    env.shell = IBM_OS
    try:
        run('crtlib test1')
    except:
        print('.................      Library exists')

def test():
    env.shell = IBM_PASE
    run('uname -s')

In this code:

1. We define two shells, "system" and the PASE "bsh", so we can call IBMi commands or PASE commands depending on the Fabric task.

2. We define two hosts to run the tasks on.

3. We add an exception handler for the error raised when the library already exists.

4. In each task, it is possible to change the environment setting for which shell to use.

To run this code > fab set_hosts check_lib test

C:\pythonmanagement>fab set_hosts check_lib test
[disibic21] Executing task 'check_lib'
[disibic21] run: crtlib test1
[disibic21] Login password for 'ACL':
[disibic21] out: CPF2111: Library TEST1 already exists.
[disibic21] out:


Fatal error: run() received nonzero return code 255 while executing!

Requested: crtlib test1
Executed: system "crtlib test1"

Aborting.
.................      Library exists
[disibic22] Executing task 'check_lib'
[disibic22] run: crtlib test1
[disibic22] out: CPF2111: Library TEST1 already exists.
[disibic22] out:


Fatal error: run() received nonzero return code 255 while executing!

Requested: crtlib test1
Executed: system "crtlib test1"

Aborting.
.................      Library exists
[disibic21] Executing task 'test'
[disibic21] run: uname -s
[disibic21] out: OS400
[disibic21] out:

[disibic22] Executing task 'test'
[disibic22] run: uname -s
[disibic22] out: OS400
[disibic22] out:


Done.
Disconnecting from disibic21... done.
Disconnecting from disibic22... done.

As you noticed, I am putting several tasks in a single file. By default Fabric has a single, serial execution mode, though there is an alternative parallel mode.

The default mode performs the following:

  • set_hosts
  • check_lib in node1
  • check_lib in node2
  • test in node1
  • test in node2
Fabric is flexible, so you can also target specific hosts directly from the command line:

> fab -H myserver1,myserver2 test


This method is very simplistic for now, but useful for understanding Fabric. If your number of servers is huge or your tasks take time to run, it is better to use a parallel approach, but I will not explain it in detail here; check Parallel Fabric. A minimal sketch follows.
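A minimal sketch using the standard Fabric 1.x parallel decorator (the host names are placeholders):

from fabric.api import env, parallel, run

env.shell = "/QOpenSys/usr/bin/bsh -c"
env.user = "ACL"
env.hosts = ['node1', 'node2', 'node3']

# Run on all hosts at the same time instead of one after another
@parallel
def test():
    run('uname -s')

Fabric 1.x also accepts -P on the command line to force parallel execution of a task.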

Other things we could do include getting and putting files into our file systems (with secure copy) and deploying IFS files, software, PTFs, etc.

There is also a very interesting project that provides a web interface for Fabric to deploy code stored in a repository and keep logs of deployments. This project is called FabricBolt.

That is everything for this post... I added some examples to my GitHub.

Your imagination is the limit... remember, it is just Python!