A simple script for checking Full Text Index population

The following script checks whether the full text index has changed in the last 15 minutes and sends an email to warn you if it is not populating. Here ftc_default is the full text catalog in your user database.

if (dateadd(ss, FULLTEXTCATALOGPROPERTY('ftc_default', 'PopulateCompletionAge'), '1/1/1990') <= dateadd(minute, -15, getdate()))
begin

    DECLARE @sub varchar(100), @Comp varchar(1000)

    SELECT @sub = cast(dateadd(ss, FULLTEXTCATALOGPROPERTY('ftc_default', 'PopulateCompletionAge'), '1/1/1990') as varchar(100));

    SELECT @Comp = concat('Check to see if full text index is ok, Last update = ', @sub, '  ', @@servername);

    EXEC msdb.dbo.sp_send_dbmail
      @profile_name = 'databasemail',
      @recipients = 'dba@lojack.com',
      @subject = 'Full Text Index IM1PRODAGL.IM1.LOCAL',
      @body = @Comp

end

This script was written because in my case I have an index that I knew should be changing at least every 15 minutes, due to updates on the underlying indexed view.
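If you want to eyeball the catalog state by hand before wiring up the alert, a quick ad hoc query like the sketch below shows the raw property values the script relies on (ftc_default is again assumed to be your catalog name):

-- Manual check of the catalog properties used above.
-- PopulateCompletionAge is the number of seconds between 1/1/1990 and the last completed population.
SELECT
    FULLTEXTCATALOGPROPERTY('ftc_default', 'PopulateStatus') AS PopulateStatus,   -- 0 = idle
    FULLTEXTCATALOGPROPERTY('ftc_default', 'ItemCount')      AS ItemCount,
    dateadd(ss, FULLTEXTCATALOGPROPERTY('ftc_default', 'PopulateCompletionAge'), '1/1/1990') AS LastPopulationCompleted;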


ORA-00257: archiver error. Connect internal only, until freed.

Over the years I have learned that common sense goes a long way as a DBA. The other day I was running an ETL process when this error occurred. The following link outlines the steps to perform to recover from it.

http://www.dba-oracle.com/sf_ora_00257_archiver_error_connect_internal_only_until_freed.htm

I have always trusted the Burleson Consulting web site for great explanations of what to check, so I went through the steps outlined in the post. Prior to altering the archivelog destination with LOG_ARCHIVE_DEST_.. = 'location=/archivelogpath reopen'; I decided to look at my db_flashback_retention_target and discovered that it was set to 24 hours. Since I was in the middle of an ETL run and really didn't need 24 hours of flashback recovery, my problem was as simple as running:

ALTER SYSTEM SET db_flashback_retention_target = 180 SCOPE=BOTH;

This dropped the required flashback retention to 3 hours and freed space in the fast recovery area so archiving could continue. I could probably also have increased db_recovery_file_dest_size to solve the issue.
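If you want to see where the space is actually going before choosing between lowering the retention target and growing the recovery area, the standard views below show the usage breakdown; the 20G in the last statement is just a placeholder value, not a recommendation.

-- How big is the fast recovery area, and what is consuming it?
SELECT * FROM V$RECOVERY_FILE_DEST;

SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM V$FLASH_RECOVERY_AREA_USAGE;

-- The alternative fix: grow the recovery area instead of shrinking flashback retention.
ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;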

My long running ETL process continued on its way, and I was a happy camper.

When the ETL process completed I just reset the retention level to my disaster recovery objectives.

Long story short, Oracle-managed directories like the Fast Recovery Area are great at reclaiming space when needed, and not panicking over stuck archive log processing should be the norm. Although deep down we are always nervous about what we might walk into on any given production day. That's just being a DBA who doesn't fly by the seat of his pants!


AWS IoT: a lot of things I've learned

I've been experimenting with AWS IoT and thought that when JITR (Just-In-Time Registration) came out it would get a lot easier.

I set up my environment to use JITR and realized it really wasn't what I was trying to accomplish. If I have thousands of things that will never connect individually through MQTT, why have a policy and certificate for each of them? I really just wanted the thing shadows updated, and to use the API for reporting. Realizing that a bridge with TLS certs would provide security, I decided to try that method of interfacing with AWS services.

So first I set up a virtual machine that runs Mosquitto, following the instructions here:

https://aws.amazon.com/blogs/iot/how-to-bridge-mosquitto-mqtt-broker-to-aws-iot/

I didn't want to use the EC2 environment, so I started following the directions at the heading "How to Configure the Bridge to AWS IoT".

After following the above document I was seeing the MQTT messages in the AWS console:

mosquitto_pub -h localhost -p 1883 -q 1 -d -t localgateway_to_awsiot -i clientid1 -m "{\"key\": \"helloFromLocalGateway\"}"

If you subscribe to both_directions in the MQTT test client in the console, you will see the messages there as well.

So I wanted more open permissions to thing shadows so I could update them through the Mosquitto broker. After playing around for a couple of days I made the following discovery.

If you define the topic like the following in your broker.conf file you can then post to multiple thing shadows:

topic $aws/things/+/shadow/update out 1

With the topic defined above, any message published to $aws/things/<device>/shadow/update will be forwarded to AWS.
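For context, here is roughly what the whole bridge section of the Mosquitto config looks like with that wildcard topic in place. This is a sketch based on the layout in the AWS bridging post above; the connection name, endpoint address, and certificate paths are placeholders, not my exact values.

# Bridge connection to AWS IoT (names and paths are placeholders)
connection awsiot
address <your_aws_iot_endpoint>.us-east-1.amazonaws.com:8883

# Forward local shadow updates for any device name up to AWS IoT
topic $aws/things/+/shadow/update out 1

# TLS material created for the bridge
bridge_protocol_version mqttv311
bridge_cafile /etc/mosquitto/certs/rootCA.pem
bridge_certfile /etc/mosquitto/certs/cert.crt
bridge_keyfile /etc/mosquitto/certs/private.key
bridge_insecure false

cleansession true
clientid bridgeawsiot
start_type automatic
notifications false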

The learning curve for me was how to delimit the mosquitto_pub message in the Unix environment, since my Mosquitto VM's OS is Debian Jessie.

The following syntax works for updating thing shadows on my Debian Jessie install.

mosquitto_pub -h localhost -p 1883 -q 1 -d -t \$aws/things/4562077637/shadow/update -i 4562077637 -m '{"state": { "reported": { "temperature": 32749, "message": "Hello World" }}}'

So the only step left was to create the many things I needed, without attached certificates or policies, and without having to do it manually through the console.

So I ran a simple Node.js script, with the AWS CLI and SDK installed on my local machine:

// Create AWS IoT things in bulk from a text file of thing names.
var AWS = require('aws-sdk');

AWS.config.region = 'us-east-1';

// The data endpoint is only needed if you also want to touch thing shadows through the API.
var endpoint = '<your_endpoint>.us-east-1.amazonaws.com';
var iot = new AWS.Iot();
var iotdata = new AWS.IotData({ endpoint: endpoint });

// mylist.txt contains one thing name per line.
var lineReader = require('readline').createInterface(
  { input: require('fs').createReadStream('mylist.txt') });

lineReader.on('line', function(line) {
  var thing = line;
  var thingParams = { thingName: thing };

  iot.createThing(thingParams).on('success', function(response) {
    // Thing created!
  }).on('error', function(response) {
    console.log(response);
  }).send();
});

This allowed me to create my things by loading a simple text file of thing names extracted directly from the database of devices I wanted to create.

The JavaScript was adapted from the following blog post, "Implementing a Serverless AWS IoT Backend with AWS Lambda and Amazon DynamoDB". I followed that post as well for additional learning on the different configurations that can be set up with Lambda functions and Amazon DynamoDB.

https://aws.amazon.com/blogs/compute/implementing-a-serverless-aws-iot-backend-with-aws-lambda-and-amazon-dynamodb/

Now, with my Mosquitto server NATed to my local host on port 1883, I can send a message on that port and it is forwarded to AWS on port 8883 using TLS and the certs set up on the broker. Since my policy for the bridge is wide open to any IoT resource, I can update any shadow and subscribe to any topic through the broker.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

So the bridge connection certs and private key are not something you want anyone to get hold of; with those, someone could impact your entire repository of things in AWS.

The last experiment I did was setting up a Java publisher to forward messages to my local Mosquitto bridge.

I modified the following sample code from GitHub to work with my local Mosquitto broker.

https://github.com/tgrall/mqtt-sample-java

The publisher code was changed to connect to the local broker:

MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());

As stated before, that port is NATed to the VM with Mosquitto installed on it.

Also, all the certificate-loading code can be eliminated, since the MQTT traffic published on that local port doesn't use TLS.

The only other code I changed was to help write the JSON message.

I referenced the jackson-annotations, jackson-core, and jackson-databind libraries.

This allowed me to write the JSON for the AWS reported state without having to use a bunch of delimiters and escaping in code.

The messageString was formatted using Jackson as follows:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

ObjectMapper mapper = new ObjectMapper();

// The innermost node holds the reported values.
ObjectNode objectmessage = mapper.createObjectNode();
objectmessage.put("temperature", 10000);
objectmessage.put("company", "acme");

// Wrap it as {"state": {"reported": {...}}}, which is what a shadow update expects.
ObjectNode objectNodeO = mapper.createObjectNode();
ObjectNode objectNodeI = mapper.createObjectNode();

objectNodeI.putPOJO("reported", objectmessage);
objectNodeO.putPOJO("state", objectNodeI);

String messageString = objectNodeO.toString();

This produces the following json string:   

== START PUBLISHER ==
    Message '{"state":{"reported":{"temperature":10000,"company":"acme"}}}' to 'topic'
== END PUBLISHER ==

Then you can point the publish call at each thing you want to update:

     client.publish("$aws/things/4562077637/shadow/update", message);
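Putting the pieces together, the publish step ends up looking roughly like the sketch below; the thing name 4562077637 is the same test device used earlier, and the QoS of 1 is an assumption on my part.

// Build the MQTT payload from the Jackson-produced JSON string and publish it to the
// local bridge, which forwards it to the AWS IoT shadow topic for that thing.
MqttMessage message = new MqttMessage(messageString.getBytes());
message.setQos(1); // at-least-once delivery (assumption)

client.publish("$aws/things/4562077637/shadow/update", message);
client.disconnect();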

Well that’s about it, I’m still learning but I hope this helps someone else save some time.


Avamar 7.3 RMAN configuration (Part 2)

Previously we looked at how to back up Oracle using the Avamar plugin. So let's look at how I currently use the RMAN plugin for Avamar to test my database restores.

Since we will be restoring to a test server, the easiest way is to not register the instance in the RMAN catalog, if you are using one, so all the restores here will use the target control file only. In an actual restore to a server with a registered RMAN instance we would log into the catalog; in a real disaster recovery situation we would likely just run a recover database against the cataloged metadata.

The first assumption is that you have installed an Oracle instance and Oracle home on your backup test server. The quickest way to do that is to run dbca and create a small database with the same SID as the one you want to restore from tape. After creating the database, shut it down and delete all the data files associated with it. Also navigate to fast_recovery_area/autobackup and remove any files under it; if RMAN finds old backups there it could create a new incarnation of the restored database.

Once you have your environment set up properly, you first need to start the instance in nomount mode to restore the control files from tape. If you have the spfile, great; if not, you can create or modify a pfile from the dbca instance you created earlier. Usually the only parameters that might need tweaking are the undo_tablespace name and memory_target. The spfile will get restored from tape as well, but if you have any resource limitations you may not want to use the spfile from the production server. In my case, the memory on my backup server is why I usually modify the parameter file for restore tests.

Once you have a valid spfile or pfile you should be able to start up in nomount mode as shown below.
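In plain text, that step looks roughly like this; the pfile name is a placeholder for whatever parameter file you prepared above.

. oraenv            (enter LJPROD at the prompt)

sqlplus / as sysdba

SQL> startup nomount pfile='/home/oracle/initLJPROD_restore.ora';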

image

 

As explained in my first post on Avamar backups, make sure that your my-avtar-flags.txt has the correct target name, which should be the backup server you are restoring to. Also, the server should point to your Avamar server, and the path should be the name of the server you are pulling the original backup from:

--debug
--pidname=Oracle
--pidnum=1002
--logfile=/home/oracle/logs/backup_prod.log
--vardir=/usr/local/avamar/var
--id=oracle
--bindir=/usr/local/avamar/bin
--ap=secretpassword
--path=/Oracle/viradbprod02
--server=hqavamar01.Lojackone.lojack.com
--expires=30
--target="viradbbackup"

In the example above:

--server  the Avamar server

--target  the server being restored to

--path    the original server's path in Avamar where the backup exists

The other file, taskfile.txt, should have just these two lines:

--no_of_channels=1

--operation=restore

With all these files correctly adjusted and our database instance in nomount mode, we should be able to run the restore in two steps:

1. Restore the control files and mount.

2. Restore and recover the database.

So for the first step we can create an RMAN script that restores the backup we want from Avamar. To do this we first go to Avamar and select the control file for the backup we want to restore.

image

If you used the backup scripts from my first post, your control files will be labeled as above. You can see the name contains the SID LJPROD, the database id 421323215, and finally the date in YYYYMMDD format. Given that information we can write a script with the following commands:

image
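In plain text the script amounts to something like the sketch below; the SBT library path and the backup piece name are placeholders you would replace with your Avamar install's values and the catalog entry selected above.

run {
  allocate channel c1 type sbt
    PARMS="SBT_LIBRARY=<path_to_avamar_sbt_library>";
  restore controlfile from '<controlfile_backup_piece_name>';
  release channel c1;
}
alter database mount;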

After saving that script we can restore our control file by running it.

Source oraenv to set the environment for your SID:

. oraenv LJPROD

rman target /

@name_of_control_file_restore_script

image

From the output above you can see two control files were restored, one under oracledata/SID and one under fast_recovery_area/SID. We can also see that the database changed from nomount to mounted state.

In the mounted state we can create another RMAN script to restore our datafiles to the newly mounted database.

image

You can see in the above script there are commented-out lines in case we didn't want to restore to the latest SCN. There are a few ways we could do an incomplete recovery, but in this case we will attempt a full recovery to the last SCN possible.
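A minimal plain-text version of that script, assuming the same Avamar SBT channel as before and leaving the incomplete-recovery line commented out, would look like this:

run {
  allocate channel c1 type sbt
    PARMS="SBT_LIBRARY=<path_to_avamar_sbt_library>";
  # set until scn <scn>;   (uncomment for an incomplete recovery)
  restore database;
  recover database;
  release channel c1;
}
alter database open resetlogs;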

image

When we hit the recover database step we should see our archivelogs being restored, as follows:

image

If we are missing archivelogs, RMAN will throw RMAN-06054 and RMAN-03002, indicating the missing sequence number and its starting SCN.

image

So now you can restore those archivelogs from either tape or the primary, if this was part of a Data Guard lag issue.

Running:

RMAN> LIST BACKUP OF ARCHIVELOG ALL;

This will give you a list of all the archivelogs and the last SCN RMAN finds on tape or on the file system.

Options:

1.) Find the archivelogs on tape or on the primary if in a Data Guard configuration. Any missing archivelogs will require you to do an incomplete recovery.

2.) Do an incomplete recovery by SCN or by date, or of course you can always do either RECOVER DATABASE UNTIL CANCEL or RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE, followed of course by ALTER DATABASE OPEN RESETLOGS; (a sketch of the RMAN form of this is shown below).
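For option 2 in RMAN terms, a minimal incomplete-recovery sketch looks like this; the SCN and the date string are placeholders for whatever point you decide to stop at.

run {
  set until scn <scn>;
  # or:  set until time "to_date('2017-01-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
alter database open resetlogs;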

RMAN is an awesome tool, and the Avamar plugin working on demand is really not that bad once you get used to the text file configurations. You should practice restores often; on my critical systems I try to do restore tests a couple of times each month.


DockerToolbox and VirtualBox on Windows 7

I've been using Docker on Windows 7 for a while now, only to find out one day that the Docker Quickstart Terminal and Kitematic could no longer connect to my VirtualBox host.

I spent a day thinking my SSH connection was the issue. Having spent some time creating the containers, I decided the first thing I needed to do was back them up.

Steps that I took are as follows:

From the boot2docker VM in VirtualBox, logged in as root, I was able to make backups of my containers.

1.) Create a snapshot

# docker ps -a

Note the CONTAINER ID column value for the statement below.

# docker commit -p containerid containername

Repeat this for all the containers you want to snapshot. (Note: if they are large you may have to do all the steps below, through the SFTP transfer, one container at a time, since the boot2docker image has limited disk space allocated.)

2.) Save the snapshot image to a tar file

# docker save -o ~/containername.tar containername

Repeat this for all the containers you want to back up.

Move the containername.tar files to the docker home directory as follows:

# mv ~/containername.tar /home/docker/.

Repeat for all the containers you want to retrieve from the host.

# cd /home/docker

Change the owner to docker so that we can use that unix account to retrieve the files:

# chown docker:docker *.tar

3.) Connect through SFTP. I use WinSCP but any SFTP client should work.

sftp: username docker, password tcuser

Usually the IP is 192.168.99.100.

Transfer the tar files to your local machine somewhere safe.

With these backups off the VirtualBox VM, we can feel safe blowing away our boot2docker image if need be.

After a day of troubleshooting SSH and trying to get Kitematic or the Quickstart Terminal to create the boot2docker image and successfully SSH in, I almost gave up on using Docker on Windows 7. But then I looked at the VirtualBox log file that each attempt was creating and saw the following error:

supR3HardenedErrorV: supR3HardenedScreenImage/LdrLoadDll: rc=VERR_LDRVI_NOT_SIGNED fImage=1 fProtect=0x0 fAccess=0x0 \Device\HarddiskVolume3\Windows\System32\LavasoftTcpService64.dll: Not signed.

Another thread discussing the issue with VirtualBox:

https://forums.virtualbox.org/viewtopic.php?f=6&t=69675&start=15

So sure enough, NAT was not working in my VirtualBox because of this unsigned LavasoftTcpService64.dll in the TCP path. It took me another complete day to get that DLL uninstalled, as it was constantly open. But once I removed LavasoftTcpService64.dll completely from my machine, Kitematic was able to create a machine in non-elevated privilege mode with success.

But of course the new machine is empty of containers, so I had to restore mine as follows:

Connect using SFTP under the docker account again:

sftp: username docker, password tcuser

Once connected, transfer over the container .tar files.

Then from the boot2docker VM as root do the following:

# cd /home/docker

# docker load -i containername.tar

List your images by using:

# docker images

Then from Kitematic you can go to the Images tab and hit Create on each image you want to run.
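Alternatively, if you would rather skip Kitematic, a loaded image can be started straight from the command line; the image and container names below are placeholders.

# docker run -d --name mycontainer containername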

That's it, I had my custom containers back. How that one LavasoftTcpService64.dll got installed I don't know; I never installed Ad-Aware Web Companion. But that one DLL set me back three days.

Hope this saves you time if your Docker SSH suddenly stops working with a similar issue. If nothing else, it has taught me to back up my containers often.


Configuring Sql Server JDBC as a JNDI ServiceMix Datasource

The following steps outline how I have configured JNDI in ServiceMix to successfully create a datasource for use within your OSGi bundles.

Install ServiceMix by unzipping the release from http://servicemix.apache.org/

Verify that your install was successful by running servicemix.bat in the bin folder.

image

This should leave you at the karaf@root> command prompt.

run the following commands:

feature:install webconsole

feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config

You will now be able to point your browser to http://localhost:8181/system/console

Log in with username karaf, password karaf.

browse to http://yourhost:8181/system/console/features

scroll down to pax-jdbc-mssql and click on install

copy sqljdbc42.jar to the /deploy folder

Create a configuration file with the database URL you want to configure JNDI for.

example:

osgi.jdbc.driver.class=com.microsoft.sqlserver.jdbc.SQLServerDriver-pool
osgi.jdbc.driver.name=sqljdbc4-pool
databaseName=Calamp_CMS
url=jdbc:sqlserver://im1stageagl.im1.local:1433;multiSubnetFailover=true;integratedSecurity=true
dataSourceName=CALAMP_CMS

Save this file as yourdatasource.cfg and store it in the etc folder.

A note on the above file: you will see that currently I have it configured to use integratedSecurity. This requires that you properly install sqljdbc_auth.dll in your path. Another assumption is that you log in as the Windows authentication user when you start ServiceMix. If you configure ServiceMix as a service, which is very easy to do, just make sure it runs under the Windows account that is authenticating to your database. The following documentation shows how to configure ServiceMix as a service.

http://servicemix.apache.org/docs/7.x/users-guide/wrapper.html

Return to the Karaf command shell and run:

karaf@root> service:list DataSourceFactory

This should show you that the sqljdbc4 driver is installed and running. You should also see pooled and XA drivers listed, in case you would like to change your configuration file in etc to use those drivers.

image

Type:

karaf@root> feature:install jdbc

karaf@root> feature:install jpa

karaf@root> ds-list   (this should now show the datasource service connecting with OK)

image

karaf@root> jndi:names   (this should show your configured JNDI datasource service name)

image

Now that we have this configured, we can reference the JNDI service as follows from Java OSGi bundles.

import java.util.logging.Level;
import java.util.logging.Logger;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceFactory {
    private static final Logger LOG = Logger.getLogger(DataSourceFactory.class.getName());

    public static DataSource createDataSourceByName(String name) {
        try {
            LOG.log(Level.FINER, "Finding DataSource: " + name);
            return (DataSource) new InitialContext().lookup("osgi:service/" + name);
        }
        catch (NamingException e) {
            LOG.log(Level.FINEST, "DataSource name " + name + " not found", e);
        }
        catch (Exception e) {
            LOG.log(Level.SEVERE, "Can't create data source " + name, e);
        }
        return null;
    }
}

The name parameter would be the datasource name shown in the jndi:names list you configured previously.
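As a quick usage sketch, a bundle could borrow a connection from that datasource as shown below; the name CALAMP_CMS comes from the example configuration file earlier, and the query is just a trivial connectivity check, not part of the original setup.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class ConnectivityCheck {
    public void printServerName() throws Exception {
        // CALAMP_CMS matches the dataSourceName from the example .cfg file above.
        DataSource ds = DataSourceFactory.createDataSourceByName("CALAMP_CMS");

        // Borrow a connection and run a trivial query to prove connectivity.
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT @@SERVERNAME")) {
            if (rs.next()) {
                System.out.println("Connected to: " + rs.getString(1));
            }
        }
    }
}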

I hope this saves you some time in getting SQL Server JDBC connectivity into your OSGi projects.


Oracle uptime, got to love it!

Capture

Then the VM host server went down.
