Archive for category Information Technology

XLSX Template with EPPlus and Web API, save as XLSX or PDF

I’ve been using NPOI for years to create/edit XLS documents via C#, and it works perfectly. If you want to work with XLS on your server without installing Office and using COM, check out NPOI; I’m sure you won’t be disappointed. Since I migrated our entire company over to Office 365, everybody is now running Office 2013+ and I’m constantly being asked why my apps are spitting out XLS files that give everybody an annoying compatibility warning and limit the rows to 65K. Since I needed a mental break (more like a learning stretch) on a new project, I decided to try out EPPlus for managing XLSX files via C# without COM.

Most of the time I develop reports/documents/etc… with Excel, I like to start with a formatted template. This makes the initial conversations with the users easy, and it abstracts the formatting from my code (insert data into a pre-formatted cell). I also find that half of my changes after go-live are formatting related, and by using Excel I can adjust and deploy the update in seconds. I did a bit of looking around and I chose EPPlus, since the project is active and there is a NuGet package available (EPPlus, Version 4.0.4).

Here is my application UI, which needs a new Excel document.


The UI is pretty simple; the workflow is based on the warehouse staff adding delivery notes (warehouse orders) to a shipment until the shipping coordinator locks the shipment and prepares it for pickup (palletize, wrap, shipping labels, etc…). The UI follows the physical process. Once the shipment is locked, the warehouse staff are unable to add additional orders to the shipment. The process of locking will schedule the next pickup and display a link to download the Excel document. I’ve been asked to support PDF as well, as the warehouse stations are moving away from using Excel. This article follows the development of the Bill of Lading (“BOL”). The interface is 100% Ajax driven, with jQuery on the front end and Web API on the back end.

Steps to Implement BOL with EPPlus & Web API

Item #1: Create Excel XLSX template with Excel 2013

Here is the Excel template I created (layout, colors, formatting, formulas for totals, etc…).


I created Warehouse_BoL.xlsx and stored it in the root of my project in a folder called /template/report/, this is the location where I keep all of my report templates.

Item #2: Populate Template and Return as XLSX/PDF

Step #1: Install EPPlus via NuGet

Package Manager Console: PM> Install-Package EPPlus

Step #2: Populate Template

Issue #1 (dynamic rows):

As I started thinking about how I was going to support 300+ rows, I came up with a VERY simple solution. I created my template with rows 12-400 allotted and formatted for data. I would then load the file with data and delete the BLANK rows. Since I wanted my document to look pretty when printed on a single page (as this should cover 90% of the documents printed), I decided to always keep rows up through 35 in the Excel file regardless of the number of rows of data added. This keeps the single-page print pretty.
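The keep-through-row-35 math can be isolated into a small helper that makes the trimming rule explicit. This is just a sketch: BolLayout and RowsToDelete are names I made up, and the real code calls sheet.DeleteRow() with these numbers directly.

```csharp
using System;

public static class BolLayout
{
    // Rows 12-400 of the template are formatted for data; rows through 35
    // are always kept so a short shipment still prints as a clean single page.
    public const int LastDataRow = 400;
    public const int MinKeepRow = 35;

    // Given the next unused data row, return the first blank row to delete
    // and how many rows to remove (a count of 0 means the template is full).
    public static (int startRow, int count) RowsToDelete(int rowIndex)
    {
        if (rowIndex < MinKeepRow)
            return (MinKeepRow + 1, LastDataRow - (MinKeepRow + 1));
        if (rowIndex < LastDataRow)
            return (rowIndex, LastDataRow - rowIndex);
        return (0, 0);
    }
}
```

For example, a 10-row shipment (next unused row 22) deletes 364 rows starting at row 36, leaving the formatted single-page layout intact.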

Issue #2 (formula refresh):

When I got the export working and moved on to exporting to PDF, I found the formulas to total cartons/weight were showing “0” in the report. I’m not sure if this is a bug, but I found notes in the documentation that you can call Calculate() on a cell to force the formula to refresh. It worked perfectly.

using OfficeOpenXml;
using System;
using System.Data;
using System.IO;
using System.Web;

public static MemoryStream GenerateBOL(int id)
{
    try
    {
        // Template File (Warehouse_BoL.xlsx lives in /template/report/, see Item #1)
        string templateDocument = HttpContext.Current.Server.MapPath("~/template/report/Warehouse_BoL.xlsx");

        // Results Output
        MemoryStream output = new MemoryStream();

        // Read Template
        using (FileStream templateDocumentStream = File.OpenRead(templateDocument))
        // Create Excel EPPlus Package based on template stream
        using (ExcelPackage package = new ExcelPackage(templateDocumentStream))
        {
            // Grab the sheet with the template, sheet name is "BOL".
            ExcelWorksheet sheet = package.Workbook.Worksheets["BOL"];

            // SubSonic 2x - Data Access, load shipment header data.
            Shipment shipment = Shipment.FetchByID(id);

            // Insert BOL header data into template
            sheet.Cells[2, 3].Value = shipment.ScheduledCarrier + " PICK-UP";
            sheet.Cells[3, 3].Value = string.Format("{0} ( {1}-{2} )",
                shipment.DestinationName, shipment.ScheduledDate, id); // format arguments illustrative; the originals were omitted from the post
            sheet.Cells[6, 6].Value = shipment.DestinationAddress1;
            sheet.Cells[7, 6].Value = shipment.DestinationAddress2;
            sheet.Cells[8, 6].Value = shipment.DestinationAddress3;
            sheet.Cells[9, 6].Value = shipment.DestinationAddress4;

            // Start Row for Detail Rows
            int rowIndex = 12;

            // SubSonic 2x - Data Access, load shipment details
            using (IDataReader reader = SPs.RohmPortal_GetShipmentOrderStatistics(id).GetReader())
            {
                while (reader.Read())
                {
                    // Cell 1, Carton Count
                    sheet.Cells[rowIndex, 1].Value = Int32.Parse(reader["Cartons"].ToString());
                    // Cell 2, Order Number (Outline around columns 2-7 makes it look like 1 column)
                    sheet.Cells[rowIndex, 2].Value = reader["DNNO"].ToString().Trim();
                    // Cell 8, Weight in LBS (convert KG to LBS, rounding to a whole number)
                    sheet.Cells[rowIndex, 8].Value =
                        Math.Round(decimal.Parse(reader["GrossWeight"].ToString()) * 2.2m);

                    // Increment Row Counter
                    rowIndex++;
                }
            }

            // Force formulas to compute, or formulas are not updated when you convert XLSX to PDF.
            sheet.Cells[402, 1].Calculate();
            sheet.Cells[402, 8].Calculate();

            // Logic to keep up to row 35 when the data is short (pretty 1 page).
            if (rowIndex < 35)
                sheet.DeleteRow(36, 400 - 36);
            else if (rowIndex < 400)
                sheet.DeleteRow(rowIndex, 400 - rowIndex);

            // Write the finished workbook into the output stream.
            package.SaveAs(output);
        }

        return output;
    }
    catch (Exception e)
    {
        // Log Exception
        return null;
    }
}
Next, I needed to create a Web API route to generate the XLSX file. Seeing how I planned to return at least two different formats (XLSX/PDF), I created a generic wrapper and a method to return the data as a stream.

Generic – Return Stream as File Method

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

public static HttpResponseMessage ReturnStreamAsFile(MemoryStream stream, string filename)
{
    // Set HTTP Status Code
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);

    // Reset Stream Position
    stream.Position = 0;
    result.Content = new StreamContent(stream);

    // Generic Content Headers
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");

    // Set Filename sent to client
    result.Content.Headers.ContentDisposition.FileName = filename;

    return result;
}

Generic – API Wrapper to Generate XLSX/PDF.

public HttpResponseMessage GetBillOfLadding(int id, string format)
{
    MemoryStream documentStream = GenerateBOL(id);

    // Make Sure Document is Loaded
    if (documentStream != null && documentStream.Length > 0)
    {
        // Generate dynamic name for BOL document "BOL-XXXXXX.PDF or .XLSX"
        string documentName = string.Format("BOL-{0}.{1}", id, format);

        // Based on format requested in URI (PDF/XLSX), call the associated method.
        switch (format)
        {
            case "PDF":
                MemoryStream pdfMemoryStream = ConvertXLSXtoPDF(documentStream);
                return ReturnStreamAsFile(pdfMemoryStream, documentName);

            case "XLSX":
                return ReturnStreamAsFile(documentStream, documentName);
        }
    }

    // If something fails or somebody calls an invalid URI, return an error.
    return new HttpResponseMessage(HttpStatusCode.NotFound);
}

Step #3: Install FreeSpire.XLS

Package Manager Console: PM> Install-Package FreeSpire.XLS

The free version is limited to 5 sheets per workbook and 150 rows per sheet; this limitation is enforced when reading or writing XLS or PDF files. Free Spire.XLS v7.8 has no limitation when loading and saving the .xlsx file format. When converting Excel files to PDF, you only get the first 3 pages of the PDF file.

Step #4: Create method to convert XLSX to PDF

Yep, Free Spire.XLS has a limit of 3 pages. Yuck!!! I was short on time and I’d love to find another simple open source solution, but E-ICEBLUE’s Spire.XLS code really made the XLSX to PDF process very simple and I had working code in 5 minutes. The DLL is ~12MB, which is 10x larger than my entire web application, but it works!

If you know of a drop-in solution, or one that requires little tweaking, to convert XLSX documents into PDF, please let me know!!!

using Spire.Xls;
using Spire.Pdf;
using Spire.Xls.Converter;

// Convert XLSX to PDF MemoryStream
public static MemoryStream ConvertXLSXtoPDF(MemoryStream stream)
{
    // Result Stream
    MemoryStream pdfMemoryStream = new MemoryStream();

    // Spire.XLS opens the XLSX workbook stream created by EPPlus
    Workbook workbook = new Workbook();
    workbook.LoadFromStream(stream, ExcelVersion.Version2013);

    // Spire.PDF converts XLSX to PDF, with limited functionality in the free version (total pages, rows, etc...)
    PdfConverter pdfConverter = new PdfConverter(workbook);
    PdfConverterSettings settings = new PdfConverterSettings();
    PdfDocument pdfDocument = pdfConverter.Convert(settings);

    // Write the PDF into the result stream
    pdfDocument.SaveToStream(pdfMemoryStream);

    return pdfMemoryStream;
}

I found references saying to use “Workbook.SaveToFile()” to generate the PDF file, but this resulted in errors (blank PDF files). Using PdfConverter() works, but you should expect a compiler warning that the method is obsolete.


XLSX Excel Document



PDF Document



Storage Processor Failover with an EMC Celerra NX4 / CLARiiON AX4

Well, yesterday was the first time in many years that I had to spend an entire day in the office on a weekend due to a major hardware outage. This is never a good thing, and it really *feels* bad when your team fails to keep core operations running smoothly. As an IT Manager, I strive for 0% unscheduled downtime and 95% up-time overall. (Yep, that 5% might sound high, but due to the volume of updates, backups, patches, hardware migrations, performance-improvement tasks, etc…, things need to go down for scheduled work.)

** I think Benjamin Franklin’s quote about an apple a day can be adapted to “An update a day keeps another hacker away!”

As for my data unavailable (DU) outage, the core symptom was our SQL Server 2008 host, connected via fiber HBA to the SAN, losing all of its LUNs. When my DBA saw this, she was on the phone with me in minutes, yelling that all of her tables had disappeared from every database in the system. She said the system went down at 1:45 AM, all of our backups failed, and all of our databases were empty except for the system tables. I immediately hopped online and realized this symptom was caused by SQL Server showing unmounted databases (why not say offline or unavailable? I’ll leave that question to Microsoft to explain…), but it definitely left her in a panic, seeing all the data missing from every database on the server. Since I’m familiar with the issue (I think I’ve caused or fixed a few thousand issues over my past 10 years of managing SQL Server), I packed the car and headed into the office.

Thinking we had DR built into our very expensive EMC SAN, I was going crazy trying to figure out why we were not online. I put in a severity 1 request with EMC, telling them that we had DU and to get somebody on the phone ASAP. I grabbed an SP collect and sent it off to EMC, and tech support person 1 of 7 found a bad fiber port on storage processor A (SP A – SFP FE 0) and said this was my issue. Since I knew we had 2 storage processors, I asked him about moving the resources to B, but he said he only knew the Celerra part of our system and declined to help, saying I would need to work with the CLARiiON tech who would be dispatched to replace SFP FE 0 in 4 hours and be on-site with parts within 24 hours. I asked multiple times if he could transfer me directly to somebody on the CLARiiON team to get my system online and data available, but he kept saying NO. I finally gave up on the call so I could log some NASTY messages in the call ticket to get somebody moving ASAP. (This works very well with EMC, so if you can’t get the online person to solve your problem, post all your issues, comments, and concerns in the ticketing system.)

Into the server room I went, one of the few times I wished the room was a bit warmer than 67 degrees… I manually moved the host fiber cable connected to the HBA over to storage processor B and saw my LUNs show up on the host in seconds, but they were all showing as unrecognized and prompting for initialization. Knowing there was something wrong, I spent about an hour looking in Navisphere, Unisphere, and the CLI (both GUIs were new to me, because 38 days prior EMC did a major update to our system). After getting halfway there, I decided to be brave and re-seat storage processor A. This ended up restoring data, but it only lasted ~45 minutes before going offline again.

At this point, the CLARiiON tech called and said he’d be on-site in 1 hour, at 2 PM, with parts. Upon his arrival with a new SFP FE 0 port, I asked him to first help me get online with storage processor B, so we could keep data available while working on storage processor A. He also didn’t know the new interface, and we proceeded to go through 5 CLARiiON tech support people, 2 shift changes, and 4 more hours before ending up with somebody who knew what to do. As I had joked to the on-site tech a few times that it was probably 1 button or switch to set, the online tech showed us a single magic button that fixed the problem! ARGH, 15 hours of downtime and 7 tech support people from EMC kept my data unavailable. The issue with the fail-over to storage processor B was that the virtual disk’s access permissions were assigned only to storage processor A, so we moved it from A -> B and I was finally online in seconds. Since this guy seemed to KNOW the system pretty well, I asked why automatic DR fail-over did not work and why manually assigning a virtual disk to a node was needed. He realized the question meant there was something else wrong, since he too was confused why this whole permission change was needed in the first place.

Another hour of research from the online tech, and he determined I was missing EMC PowerPath software on all my hosts connected to the SAN via iSCSI and Fiber.

At this point, totally confused, I got a 30-minute lesson on PowerPath: how it works and where it was supposed to be installed. Finally, after hashing out the whole situation to determine the root cause with the original designer, the past employee who led the project for me, my current team, Celerra support, and the on-site CLARiiON support, it was determined that the automatic fail-over features of the SAN were never set up correctly, and we had never had an outage to trigger this scenario. This was a *shock* to everybody involved, but it explains a few mysterious events that have happened over the past 6 months (cluster resources becoming unavailable, complete SAN failure during an upgrade, high IO on individual storage processors, etc…) that EMC support could not explain.

So, the end result and lesson learned: always install PowerPath on all non-clustered hosts that connect to your EMC SAN if you want automatic fail-over. If you do have this setup and have a storage processor fail, follow these steps:

1. Connect to Navisphere
2. Click “Virtual Disks”
3. Choose the virtual disk to edit and click its name.
4. Click “Virtual Disk Access Details”


5. Click “Change Ports for Access”
** On this screen, it will show where the virtual disk is currently assigned (e.g. SP A  or SP B).

That’s it! Hopefully this saves somebody 15 hours of downtime, confused tech support people, and/or a mis-configured DR setup!

Download Your Site to HTML

HTTrack Website Copier is awesome and FREE! I’ve been using it for a few months and I’ve already saved myself and my team members hundreds of hours by downloading full working copies of various websites locally.

Here are a few examples of websites I’ve downloaded.

  1. A client was having problems with their hosting company, who refused to give them access to the source code (developed by a 3rd party) of their CF website. After weeks of trying to solve the problem, they decided to switch hosting companies. During their migration to a new hosting company and a new CMS solution, they wanted to keep a copy of the old site online. Regardless of the engine running the website, HTTrack was able to download a working copy of the site to HTML, minus the dynamic features (search, CMS admin, etc…), in a few minutes.
  2. We inherited an old web server at work running Plone 2.0.4. The machine was running on old hardware that was failing (2 of 3 HDDs dead, broken RAID 1) and Zope was randomly crashing. After spending 3 days trying to get the machine stable, I decided to convert the site to HTML and put it on one of our newer “stable” web servers. The process was a little messy; since our Plone instance pointed to folder paths vs. actual files (/index_html vs. /index.html), the final static site had a lot of weird and randomly redirected links! The final static HTML site probably took ~8 solid hours of HTML clean-up, but there was also a lot of bad/obsolete content in the existing portal that made the process painful!
  3. I have a few websites I like to use as samples/demos that show better as interactive pages! Using HTTrack and controlling the depth of link following/downloading (like first page only), I can download an offline working copy of a site’s main page! This is a great way to demo a feature/capability/competitor website…

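The depth control mentioned above is a single flag on HTTrack’s command line. A sketch (the URL and output folder are placeholders):

```shell
# Mirror only to depth 2 (the start page plus pages it links to directly);
# -r1 grabs just the first page itself. URL and output path are illustrative.
httrack "https://example.com/" -O ./demo-mirror -r2
```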
A few other good things to know: a couple of sites I’ve downloaded require you to log in first. This isn’t an issue with HTTrack, since you can build your own cookies to apply to your capture. It was a bit tricky getting the cookie format correct (I used the Netscape cookie format), but after a few minutes I was able to take the cookie data from IE/Firefox and have a perfect working cookie for HTTrack to use!
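For reference, the Netscape cookie format is a plain tab-separated cookies.txt placed in the mirror’s project folder. A sketch with made-up values (the fields are: domain, include-subdomains flag, path, secure flag, expiry as a Unix timestamp, name, value, separated by tabs):

```
# Netscape HTTP Cookie File
.example.com	TRUE	/	FALSE	1999999999	SESSIONID	abc123
```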

Once again, this is a FREE utility that is handy to have around!