
Problem with Initializing "SignalGroupSelectionCheckedListBox" in "openVisN"

Aug 29, 2013 at 6:31 PM
Edited Aug 29, 2013 at 6:36 PM
Hello, there

The openVisN demo shown during the user forum was really impressive. We would like to try it on our own and hopefully use it further to extract data and so on. I managed to build the openVisN project with no problem. However, when I run it, no device or signal is shown at the top left. I checked the code, and it seems to me that all the signals shown in the checked list box (chkAllSignals) are created in "initialize(SubscriptionFramework framework)" in the SignalGroupSelectionCheckedListBox class. But I cannot find a SubscriptionFramework object initialized anywhere in the openVisN project. So how does openVisN get the information about the devices and their relevant signals? Could someone please shed some light on this?

Thanks a lot!

Zoe
Aug 29, 2013 at 7:35 PM
Edited Aug 30, 2013 at 3:00 PM
Answering my own question: in the "Connect" button click handler, a VisualizationFramework object (visualizationFramework1) is started via Start(). In the Start() method of the VisualizationFramework class, a SubscriptionFramework object is created and started.
The reason the CheckedListBox is empty is that in FrmMain.Designer.cs, the UseNetworkHistorian value of visualizationFramework1 is set to "false". Changing it from "false" to "true" lets it pull its settings from the server IP, port number, and historian database, and the device information then shows up in the CheckedListBox.
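For anyone else hitting this, here is a rough sketch of what the change looks like in the designer-generated code. Only UseNetworkHistorian is the property I actually changed as described above; the other property names and values here are just my assumptions for illustration:
this.visualizationFramework1.UseNetworkHistorian = true;   //was false, which left the CheckedListBox empty
this.visualizationFramework1.Server = "127.0.0.1";         //historian server IP (assumed property name, example value)
this.visualizationFramework1.Port = 12345;                 //historian port (assumed property name, example value)
this.visualizationFramework1.Database = "MyDatabase";      //historian database (assumed property name, example value)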
Developer
Sep 3, 2013 at 3:35 AM
openVisN is a work in progress. Some of the things we do in openVisN are workarounds to get a quick demo application together. The idea behind the openVisN framework is to allow developers to quickly implement a custom visualization application from the Visual Studio Designer. Simply include openVisN.WinForms.dll to get your snap-in components. Everything can be configured from the GUI, as you can see from FrmMain.Designer.cs in the openVisN project.

Once we get openHistorian 2.0 closer to release, we will work on maturing the openVisN application. Thanks for your interest!
Sep 3, 2013 at 8:00 PM
Steven,

I completely understand your perspective. However, we are trying to make a decision on the database solution for our future FNET server infrastructure. Two main points weigh on this decision: 1) from a management point of view, how reliable it is, the size of the database, and how easy it is to back up; 2) from a user point of view, how easy and efficient it is to convert data from the old format and to extract data for a certain period of time. As the server manager, my concerns are both points, but for the other 20 people in my lab, including my supervisor, the main concern is the second one. So, in order to proceed any further with choosing openHistorian, I need to prove the second point. I think openVisN already has a great foundation for visualization, thanks to your great work! Would it be possible for you to provide a very simple demo of an extraction tool (getting a certain period of data for one device from the historian)? I can then develop one customized to our system, so that we can proceed with openHistorian 2.0. We would really like to contribute to the community as a testing user.
Another question: is it possible to access the historian easily using C++? How about other languages?

Thank you very much.

Zoe
Developer
Sep 4, 2013 at 3:24 AM
I know where you are coming from. I'm a full-time employee of Oklahoma Gas & Electric, and I am the sole manager of our synchrophasor database (~15TB in size) and all of our synchrophasor applications (eight or so). (We are up to ~200 PMUs at 30 samples per second.) In production, we are using a custom SQL Server implementation that I wrote 3 years ago. We have openHistorian 2.0 in our development environment, and I literally spent less than a day integrating openHistorian 2.0 into our existing applications.

Ease of server management and ease of application development are key to our adoption of openHistorian 2.0 as well. What I have found over the past few years is that our SQL Server implementation is only as reliable as I am. If I am not constantly managing storage sizes and database file groups, SQL Server will eventually run out of space and we will lose a ton of data. Over the past 3 years, this has happened a few dozen times, totaling maybe 2 weeks' worth of downtime. One of the major benefits of the openHistorian is that the main developer is a heavy user of the application. I hope to find and solve many of these server management and ease-of-use issues before the public does.

About Backups: We are still working on how files will be written to disk. Before release, we will have the option to specify a "Working" directory and multiple "Completed" directories. Right now the historian only stores to the "Working" directory. You may notice that the archive files have strange names and are constantly written to and deleted. At the end of every day (or a user-defined time), all of the data from the "Working" directory will be combined and condensed into a single file that will be placed in the "Completed" directory. Once placed in the completed directory, these files will not change. So database backups can be performed simply by backing up that directory. This means that once a file has been backed up, it never has to be backed up again (unlike SQL Server backups).
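
As a rough illustration of that model (this is not actual openHistorian code; the paths and directory layout are assumptions), an incremental backup can simply skip any file that already exists at the destination, since completed files never change:
//requires: using System.IO;
string completedDir = @"D:\Historian\Completed"; //hypothetical "Completed" directory
string backupDir = @"\\backupserver\historian";  //hypothetical backup share

foreach (string source in Directory.GetFiles(completedDir))
{
    string destination = Path.Combine(backupDir, Path.GetFileName(source));
    if (!File.Exists(destination)) //completed files never change, so anything already copied is skipped
        File.Copy(source, destination);
}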

About Free Space Management:
When we are dealing with a 15TB database, we like to create multiple LUNs on our SAN so Windows storage management is easier. Right now we have 7 LUNs that we manage, which means we have to keep adding free space to a LUN and then tell SQL Server to utilize that free space. When we add "Completed" directories to the openHistorian, you will be able to specify rules about how much free space to leave in each directory. This will greatly reduce the amount of server management that I have to do, and I hope it benefits others as well.

About Usability:
Since the openHistorian is effectively a client-server application, any number of clients can get data from the historian, so any number of clients can be used to communicate with it via a socket. We have plans to include a SQL CLR implementation of the openHistorian to help with integration, though this may not be available upon release. Integration with anything other than .NET is currently not planned until after release.

About Reliability:
Currently, the openHistorian alpha has been very reliable at OG&E. It even continues to run and remain available when it has completely run out of disk space on our development server. (We don't have data retirement programmed yet, but I can write about 600GB of archive files in a single run of the service without it crashing; not bad for an alpha.) Ideally, we will be able to be as available as SQL Server. We are considering a load-balancing, clustered server approach for data so we can ensure zero data loss during planned or unplanned outages. This likely won't be available for the release.

There are a handful of ways to extract data from the historian. Fundamentally, you specify a start/stop time and an array of pointIDs, and get back an IEnumerable<Key/Value>. It's actually a bit more code than that, but fundamentally, this is how it works. If you are using C#, you get many more options, like the ability to time-correlate multiple points or export the data as a .NET DataTable. If you have a basic idea of what you want to demo, let me know and I can put something together.
Sep 4, 2013 at 4:23 PM
Steven,

Thank you so much for such a complete explanation. It helps a great deal! We are looking at other DB solutions like SQL Server, MongoDB, Cassandra, and Hadoop, but after the user forum, and especially after what you just explained, openHistorian 2.0 is really my favorite option. I really appreciate your great work in developing this wonderful program and your patience in explaining it to me in such detail. I understand that it is just an alpha version, but I would very much like to start using it, grow with it, and hopefully contribute as well.

I am now trying to put together something to show my group to support my decision. I would really appreciate a demo of the following procedure: select a directory that contains the historian -> select one device -> select a start time -> select an end time -> select the signals to extract -> click an "Extract" button to get a CSV file. This would give me a perfect starting point to build a customized demo for our group.

I know you mentioned communicating via socket, which may not be available upon release. In that case, would it be possible to map the complete directory (a backup copy, of course) as a shared folder on our group network and let people extract data by just using the extraction tool on that directory?

Thanks a lot for your help! Really really appreciate it!

Zoe
Developer
Sep 4, 2013 at 5:32 PM
Edited Sep 4, 2013 at 5:34 PM
The historian socket communication is currently functional, but only for C#. The socket interface I'm describing is different from the Gateway Exchange Protocol and is the preferred way to communicate with the historian due to its superior speed. The Gateway Exchange Protocol works, and I believe there are C++ adapters for it.

Right now, we do not support multiple historian instances sharing direct file access over a network share. This introduces quite a bit of complexity when one instance is archiving and a few other instances are reading from the files. We have implemented these file-locking mechanisms in the openHistorian server software itself rather than relying on network file locks.

I'll see about putting together a demo for data extraction. Fundamentally, this is the code that you will use (note: this is untested code, but it should work):
HistorianClientOptions clientOptions = new HistorianClientOptions();
clientOptions.DefaultDatabase = "PPA";
clientOptions.NetworkPort = 38402;
clientOptions.ServerNameOrIp = "127.0.0.1"; //IP address of server.

using (var client = new HistorianClient<HistorianKey, HistorianValue>(clientOptions))
{
    var database = client.GetDatabase();
    using (var reader = database.OpenDataReader())
    {
        using (var csvStream = new StreamWriter("C:\\temp\\file.csv"))
        {
            csvStream.Write("Timestamp,PointID,Value,Quality");
            var stream = reader.Read(DateTime.MinValue, DateTime.MaxValue, new ulong[] { 1, 2, 3 }); //Filter parameters
            while (stream.Read())
            {
                csvStream.WriteLine("{0},{1},{2},{3}", stream.CurrentKey.TimestampAsDate, stream.CurrentKey.PointID, stream.CurrentValue.AsSingle, stream.CurrentValue.Value3);
            }
            csvStream.Flush();
        }
    }
    database.Disconnect();
}
Sep 4, 2013 at 6:21 PM
Edited Sep 4, 2013 at 6:25 PM
Steven,

Thanks a lot for such prompt response! Really appreciate it!

I understand the complexity of read/write locks on the database and your solution. We are planning on having a file server that only keeps the database (that is, it will not archive, but just store the "Completed" directory). We will set up a copy script to transfer the completed archive files to that file server as a backup. In that case, can we change the server IP parameter to point at that server and run the extraction software so that multiple people have access to it? We constantly run data analysis and data visualization for utilities, so this is an important feature for our group.
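
In other words, based on the options in your sample above (the host name here is just a made-up example), I am picturing something like:
clientOptions.ServerNameOrIp = "fnet-fileserver"; //hypothetical file server that only hosts the "Completed" directory
clientOptions.NetworkPort = 38402;
clientOptions.DefaultDatabase = "PPA";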

And that code really helps clear up a lot of my confusion! I can start working on it.

Thank you so much again for your help!

Zoe
Sep 13, 2013 at 2:40 PM
Steven,

Thank you so much for your sample code. I managed to have an extraction GUI tool ready as a demo for my group. Your API is very easy to use and understand. Thanks a lot for your great work!

Zoe