XLSX Template with EPPlus and Web API, save as XLSX or PDF

I’ve been using NPOI for years to create and edit XLS documents via C#, and it works perfectly. If you want to work with XLS on your server without installing Office and using COM, check out NPOI; I’m sure you won’t be disappointed. Since I migrated our entire company over to Office 365, everybody is now running Office 2013+, and I’m constantly being asked why my apps are spitting out XLS files that give everybody an annoying compatibility warning and limit the rows to 65K. Since I needed a mental break (more like a learning stretch) on a new project, I decided to try out EPPlus for managing XLSX files via C# without COM.

Most of the time when I develop reports and documents with Excel, I like to start with a formatted template. This makes the initial conversations with the users easy, and it abstracts the formatting from my code (insert data into a pre-formatted cell). I also find that half my changes after go-live are formatting related, and by using Excel I can adjust and deploy the update in seconds. I did a bit of looking around and I chose EPPlus, since the project is active and there is a NuGet package available (EPPlus, version 4.0.4).

Here is my application UI, which needs a new Excel document:

[Screenshot: application UI]

The UI is pretty simple. The workflow is based on the warehouse staff adding delivery notes (warehouse orders) to a shipment until the shipping coordinator locks the shipment and prepares it for pickup (palletize, wrap, shipping labels, etc.). The UI follows the physical process. Once the shipment is locked, the warehouse staff are unable to add additional orders to it. Locking the shipment schedules the next pickup and displays a link to download the Excel document. I’ve been asked to support PDF as well, as the warehouse stations are moving away from using Excel. This article follows the development of the Bill of Lading (“BOL”). The interface is 100% Ajax driven, with jQuery on the front end and Web API on the back end.

Steps to Implement BOL with EPPlus & Web API

Item #1: Create Excel XLSX template with Excel 2013

Here is the Excel template I created (layout, colors, formatting, formulas for totals, etc.).

[Screenshot: Excel BOL template]

I created Warehouse_BoL.xlsx and stored it in a folder called /templates/report/ in the root of my project; this is where I keep all of my report templates.

Item #2: Populate Template and Return as XLSX/PDF

Step #1: Install EPPlus via NuGet

Package Manager Console: PM> Install-Package EPPlus

Step #2: Populate Template

Issue #1 (dynamic rows):

As I started thinking about how I was going to support 300+ rows, I came up with a very simple solution. I created my template with rows 12-400 allotted and formatted for data. I would then load the file with data and delete the blank rows. Since I wanted my document to look pretty when printed on a single page (which should cover 90% of the documents printed), I decided to always leave rows through 35 in the Excel file regardless of the number of rows of data added. This keeps the single-page print looking clean.
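The row bookkeeping above can be isolated into a small helper. This is a sketch (the `BolRows` class and its method names are mine, not part of EPPlus); the returned start row and count are what you would feed to EPPlus's `ExcelWorksheet.DeleteRow(rowFrom, rows)`:

```csharp
using System;

// Sketch of the row bookkeeping: data rows are pre-formatted from row 12
// through 400, and rows through 35 are always kept so a short document
// still prints nicely on one page. Class and method names are illustrative.
public static class BolRows
{
    public const int MinLastRow  = 35;   // always keep through this row
    public const int LastDataRow = 400;  // last pre-formatted data row

    // First blank row to delete, given the next unused row index.
    public static int DeleteStart(int nextFreeRow)
    {
        return Math.Max(nextFreeRow, MinLastRow + 1);
    }

    // Row count to pass to ExcelWorksheet.DeleteRow(start, count); 0 means nothing to delete.
    public static int DeleteCount(int nextFreeRow)
    {
        int start = DeleteStart(nextFreeRow);
        return start > LastDataRow ? 0 : LastDataRow - start + 1;
    }
}
```

For example, a 10-order shipment finishes with the row counter at 22, so rows 36 through 400 (365 rows) are deleted and the short document keeps its one-page look.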

Issue #2 (formula refresh):

When I got the export working and moved on to exporting to PDF, I found that my formulas to total cartons/weight were showing “0” in the report. I’m not sure if this is a bug, but I found notes in the documentation that you can call Calculate() on a cell to force the formula to refresh. It worked perfectly.

using System;
using System.Data;
using System.IO;
using System.Web;
using OfficeOpenXml;

public static MemoryStream GenerateBOL(int id)
{
    try
    {
        // Template File
        string templateDocument = 
            HttpContext.Current.Server.MapPath("~/templates/report/warehouse_bol.xlsx");

        // Results Output
        MemoryStream output = new MemoryStream();

        // Read Template
        using (FileStream templateDocumentStream = File.OpenRead(templateDocument))
        {
            // Create Excel EPPlus Package based on template stream
            using (ExcelPackage package = new ExcelPackage(templateDocumentStream))
            {
                // Grab the sheet with the template, sheet name is "BOL".
                ExcelWorksheet sheet = package.Workbook.Worksheets["BOL"];

                // SubSonic 2x - Data Access, load shipment header data.
                Shipment shipment = Shipment.FetchByID(id);

                // Insert BOL header data into template
                sheet.Cells[2, 3].Value = shipment.ScheduledCarrier + " PICK-UP";
                sheet.Cells[3, 3].Value = string.Format("{0} ( {1}-{2} )",
                    shipment.CustomerName,
                    shipment.CustomerCode,
                    shipment.DelToCode);
                sheet.Cells[6, 6].Value = shipment.DestinationAddress1;
                sheet.Cells[7, 6].Value = shipment.DestinationAddress2;
                sheet.Cells[8, 6].Value = shipment.DestinationAddress3;
                sheet.Cells[9, 6].Value = shipment.DestinationAddress4;

                // Start Row for Detail Rows
                int rowIndex = 12;

                // SubSonic 2x - Data Access, load shipment details
                using (IDataReader reader = SPs.RohmPortal_GetShipmentOrderStatistics(id).GetReader())
                {
                    while (reader.Read())
                    {
                        // Cell 1, Carton Count
                        sheet.Cells[rowIndex, 1].Value = Int32.Parse(reader["Cartons"].ToString());
                        // Cell 2, Order Number (Outline around columns 2-7 make it look like 1 column)
                        sheet.Cells[rowIndex, 2].Value = reader["DNNO"].ToString().Trim();
                        // Cell 8, Weight in LBS (convert KG to LBS, and rounding to whole number)
                        sheet.Cells[rowIndex, 8].Value =
                            Math.Round(decimal.Parse(reader["GrossWeight"].ToString()) * 2.2m);

                        // Increment Row Counter
                        rowIndex++;
                    }
                }

                // Force formulas to compute; otherwise they are not updated when you convert XLSX to PDF.
                sheet.Cells[402, 1].Calculate();
                sheet.Cells[402, 8].Calculate();

                // Logic to keep rows through 35 when the data is short (pretty 1-page print).
                if (rowIndex <= 35)
                {
                    // Data ended early: delete blank rows 36 through 400.
                    sheet.DeleteRow(36, 400 - 36 + 1);
                }
                else if (rowIndex <= 400)
                {
                    // Delete the remaining blank rows through row 400.
                    sheet.DeleteRow(rowIndex, 400 - rowIndex + 1);
                }

                package.SaveAs(output);
            }
            return output;
        }
    }
    catch (Exception e)
    {
        // Log Exception
        return null;
    }
}
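As a variant of the cell-by-cell formula refresh above, EPPlus can also recalculate everything at once. A minimal sketch, assuming EPPlus 4.x, where Calculate() is also available at the workbook level (the `RecalculateAndSave` wrapper name is mine):

```csharp
using System.IO;
using OfficeOpenXml;

// Recompute every formula in the workbook before saving, instead of
// calling Calculate() on individual total cells.
public static void RecalculateAndSave(ExcelPackage package, Stream output)
{
    package.Workbook.Calculate();  // evaluates formulas on all worksheets
    package.SaveAs(output);
}
```

This is convenient when the template gains more totals later, since no new Calculate() calls need to be added.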

Next, I needed to create a Web API route to generate the XLSX file. Since I planned to return at least two different formats (XLSX/PDF), I created a generic wrapper and a method to return the data as a stream.

Generic – Return Stream as File Method

using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

public static HttpResponseMessage ReturnStreamAsFile(MemoryStream stream, string filename)
{
    // Set HTTP Status Code
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);

    // Reset Stream Position
    stream.Position = 0;
    result.Content = new StreamContent(stream);

    // Generic Content Header
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");

    //Set Filename sent to client
    result.Content.Headers.ContentDisposition.FileName = filename;

    return result;
}

Generic – API Wrapper to Generate XLSX/PDF.

[Route("api/warehouse/schedule/bol/{id}/{format}")]
[HttpGet]
public HttpResponseMessage GetBillOfLading(int id, string format)
{
    MemoryStream documentStream = GenerateBOL(id);

    // Make Sure Document is Loaded
    if (documentStream != null && documentStream.Length > 0)
    {
        // Generate dynamic name for BOL document "BOL-XXXXXX.PDF or .XLSX"
        string documentName = string.Format("BOL-{0}.{1}", id, format);

        // Based on format requested in URI (PDF/XLSX), call the associated method.
        switch (format)
        {
            case "PDF":
                MemoryStream pdfMemoryStream = ConvertXLSXtoPDF(documentStream);
                return ReturnStreamAsFile(pdfMemoryStream, documentName);

            case "XLSX":
                return ReturnStreamAsFile(documentStream, documentName);
        }
    }

    // If something fails or somebody calls invalid URI, throw error.
    return new HttpResponseMessage(HttpStatusCode.NotFound);
}

Step #3: Install FreeSpire.XLS

Package Manager Console: PM> Install-Package FreeSpire.XLS

The free version is limited to 5 sheets per workbook and 150 rows per sheet. This limitation is enforced when reading or writing XLS or PDF files. Free Spire.XLS v7.8 has no limitation when loading and saving the .xlsx file format. When converting Excel files to PDF, you only get the first 3 pages of the PDF file.

Step #4: Create method to convert XLSX to PDF

Yep, FreeSpire.XLS has a limit of 3 pages. Yuck!!! I was short on time and I’d love to find another simple open source solution, but E-ICEBLUE’s Spire.XLS code made the XLSX-to-PDF process very simple and I had working code in 5 minutes. The DLL is ~12MB, which is 10x larger than my entire web application, but it works!

If you know of a plugin solution, or one that requires little tweaking, to convert XLSX documents into PDF, please let me know!!!

using Spire.Xls;
using Spire.Pdf;
using Spire.Xls.Converter;

// Convert XLSX to PDF MemoryStream
public static MemoryStream ConvertXLSXtoPDF(MemoryStream stream)
{
    // Result Stream
    MemoryStream pdfMemoryStream = new MemoryStream();

    // Spire.XLS to open XLSX workbook stream created by EPPlus
    Workbook workbook = new Workbook();
    workbook.LoadFromStream(stream, ExcelVersion.Version2013);

    // Spire.PDF to convert XLSX to PDF, I read it has limited functionality (total pages, rows, etc...).
    PdfConverter pdfConverter = new PdfConverter(workbook);
    PdfConverterSettings settings = new PdfConverterSettings();
    PdfDocument pdfDocument = pdfConverter.Convert(settings);
    pdfDocument.SaveToStream(pdfMemoryStream);
            
    return pdfMemoryStream;
}

I found references saying to use Workbook.SaveToFile() to generate the PDF file, but this was resulting in errors (blank PDF files). Using PdfConverter() works, but you should expect a compiler warning that the method is obsolete.

THE RESULTS

XLSX Excel Document

/api/warehouse/schedule/bol/14240002/XLSX

[Screenshot: generated XLSX document opened in Excel]

PDF Document

/api/warehouse/schedule/bol/14240002/PDF

[Screenshot: generated PDF document opened in Adobe Acrobat]

Adding Web API 2.1 to a Web Forms App

Like many others, I maintain a huge pile of legacy apps that are running very smoothly with no justification for a major upgrade; lots of small code tweaks and simple enhancements, but nothing major. I had a new request the other day to add a basic warehouse tracking system that people could add packages to by scanning bar codes. It's a very simple process and something I could easily do with jQuery, the jQuery tmpl plugin, and a back-end service. Before running off to build a WCF service, I decided to take a peek at Web API (version 2.1, included with Visual Studio 2013) to learn something new.

The MSDN tutorials were pretty straightforward but missing a few steps that I didn’t figure out until I messed things up a few times. Nothing terrible, and with Git you can just toss the changes and start over again. ;)

Here is the play-by-play to add Web API 2.1 to a Web Forms application:

  1. In the root of your Web Forms application, create a folder called “api”.

    ** The name and location of this folder is best practice for Web API controllers.

     

  2. Right-click on your new “api” folder and choose Add -> New Item from the context menu.

    On the “Add New Item” menu, go to Web and choose Web API Controller Class (v2.1) and name your controller [Something]Controller.cs.

    ** The standard naming convention for Web API is to add the suffix “Controller” to your Web API class name. The word “Controller” will never appear in API calls, but it’s important for the convention-based routing. In the example below, I created a Web API for my inventory functions and named it “InventoryController.cs”.

     

    using System.Collections.Generic;
    using System.Web.Http;
    
    namespace ZachHunter.DemoProject.api
    {
        public class InventoryController : ApiController
        {
            // GET api/<controller>
            public IEnumerable<string> Get()
            {
                return new string[] { "value1", "value2" };
            }
    
            // GET api/<controller>/5
            public string Get(int id)
            {
                return "value";
            }
    
            // POST api/<controller>
            public void Post([FromBody]string value)
            {
            }
    
            // PUT api/<controller>/5
            public void Put(int id, [FromBody]string value)
            {
            }
    
            // DELETE api/<controller>/5
            public void Delete(int id)
            {
            }
        }
    }
    

     

  3. Next, go to the “App_Start” folder in the root of your application (if you don’t have this folder, create it), then right-click on the folder and add a new class named “WebApiConfig.cs”.

    ** This is the recommended location to store your Web API routing configuration.

    Add the following two routing rules to your WebApiConfig.cs file to enable convention-based and attribute-based routing:

     

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Http;
    
    namespace ZachHunter.DemoProject
    {
        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Attribute Routing
                // Allows for custom routing to be set at the method, example [Route("api/warehouse/scheduling/all")]
                config.MapHttpAttributeRoutes();
    
                // Convention-Based Routing
                // Allows for standard controller based routing, example api/warehouse, api/warehouse/1
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );
            }
        }
    }
    

     

  4. Finally, open your Global.asax.cs and modify your Application_Start to load your Web API routing configuration.

     

            protected void Application_Start(Object sender, EventArgs e)
            {
                // Here are settings to change the default JSON.net Json formatting to indented, many other options to choose.
                HttpConfiguration config = GlobalConfiguration.Configuration;
                config.Formatters.JsonFormatter.SerializerSettings.Formatting = Newtonsoft.Json.Formatting.Indented;
    
                // WebApi Routing, loads up your Web API routes stored in App_Start/WebApiConfig.cs
                GlobalConfiguration.Configure(WebApiConfig.Register);
            }
    

Automatic changes made when you added Web API 2.1 Class

References:

Newtonsoft.Json.dll
System.Net.Http.dll
System.Web.Http.WebHost.dll

Web.Config

    <handlers>
      <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
      <remove name="OPTIONSVerbHandler" />
      <remove name="TRACEVerbHandler" />
      <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="*" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
  ...
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-7.0.0.0" newVersion="7.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>

When you’re all done, you can browse to the API URL by running your project and typing in the convention-based syntax listed in the comments above the auto-generated methods (e.g. api/inventory). If you use IE, it will default to loading JSON, or you can use Firefox and it will default to XML.

Here is a basic call in jQuery using the Web API sample above:

$(document).ready(function () {
    $.ajax({
        type: 'GET',
        url: '/api/inventory',
        contentType: 'application/json; charset=utf-8',
        dataType: 'json',
        data: '',
        success: function (results) {
            // Do something with the results
        },
        error: function (xhr, textStatus, error) {
            // In case of error, do something else
        }
    });
});

Storage Processor Failover with an EMC Celerra NX4 / Clarion AX4

Well, yesterday was the first time in many years that I had to spend an entire day in the office on a weekend due to a major hardware outage. This is never a good thing, and it really *feels* bad when your team fails to keep core operations running smoothly. As an IT Manager, I strive for 0% unscheduled downtime and 95% uptime overall (yep, that 5% might sound high, but due to the volume of updates, backups, patches, hardware migrations, performance improvement tasks, etc., things need to go down for scheduled work).

** I think Benjamin Franklin’s quote about an apple a day can be adapted to “An update a day keeps another hacker away!”

As for my data unavailable (DU) outage, the core symptom was our SQL Server 2008 host, connected via fiber HBA to the SAN, losing all of its LUNs. When my DBA saw this, she was on the phone with me in minutes, yelling that all of her tables had disappeared from every database in the system. She said that the system went down at 1:45 AM, all of our backups failed, and all of our databases were empty except for the system tables. I immediately hopped online and realized this symptom was caused by SQL Server showing unmounted databases (why not say offline or unavailable? I’ll leave that question to Microsoft to explain), but it definitely left her in a panic seeing all the data missing from every database on the server. Since I’m familiar with the issue (I think I’ve caused or fixed a few thousand issues over my past 10 years of managing SQL Server), I packed the car and headed into the office.

Thinking we had DR built into our very expensive EMC SAN, I was going crazy trying to figure out why we were not online. I put in a severity 1 request with EMC, telling them that we had DU and to get somebody on the phone ASAP. I grabbed an SP collect and sent it off to EMC, and tech support person 1 of 7 found a bad fiber port on storage processor A (SP A – SFP FE 0) and said this was my issue. Since I knew we had 2 storage processors, I asked him about moving the resources to B, but he said he only knew the Celerra part of our system and declined to help, saying I would need to work with the Clarion tech, who would be dispatched to replace SFP FE 0 within 4 hours and be on-site with parts within 24 hours. I asked multiple times if he could directly transfer me to somebody on the Clarion team to get my system online and data available, but he kept saying NO. I finally gave up on the call so I could log some NASTY messages in the call ticket to get somebody moving ASAP (this works very well with EMC, so if you can’t get the online person to solve your problem, post all your issues, comments, and concerns in the ticketing system).

Into the server room I went, one of the few times I wish the room was a bit warmer than 67 degrees. I manually moved the host fiber cable connected to the HBA over to storage processor B and saw my LUNs show up on the host in seconds, but they were all showing as unrecognized and prompting for initialization. Knowing there was something wrong, I spent about an hour looking in Navisphere, Unisphere, and the CLI (both GUIs were new to me, because 38 days prior EMC did a major update to our system). After getting halfway there, I decided to be brave and re-seat storage processor A. This ended up restoring data, but it only lasted ~45 minutes before going offline again.

At this point, the Clarion tech called and said he’d be on-site in 1 hour, at 2 PM, with parts. Upon his arrival with a new SFP FE 0 port, I asked him to first help me get online with storage processor B, so we could get data available while working on storage processor A. He also didn’t know the new interface, and we proceeded to go through 5 Clarion tech support people, 2 shift changes, and 4 more hours before ending up with somebody who knew what to do. As I had joked to the on-site tech a few times about it probably being 1 button or switch to set, the online tech showed us a single magic button to fix the problem! ARGH, 15 hours of downtime and 7 tech support people from EMC kept my data unavailable. The issue with the fail-over to storage processor B was permissions: the virtual disk was assigned only to storage processor A, so we moved it from A to B and I was finally online in seconds. Since this guy seemed to KNOW the system pretty well, I asked why automatic DR fail-over did not work and why manually assigning a virtual disk to a node was needed. He realized the question meant there was something else wrong, since he too was confused why this whole permission change was needed in the first place. Another hour of research from the online tech, and he determined I was missing the EMC PowerPath software on all my hosts connected to the SAN via iSCSI and fiber.

At this point, totally confused, I had to get a 30-minute lesson on PowerPath: how it works and where it was supposed to be installed. Finally, after hashing out the whole situation to determine the root cause with the original designer, the past employee who led the project for me, my current team, Celerra support, and the on-site Clarion support, it was determined that the automatic fail-over features of the SAN were never set up correctly, and we had never had an outage to trigger this scenario. This was a *shock* to everybody involved, but it explains a few mysterious events that have happened over the past 6 months (cluster resources becoming unavailable, complete SAN failure during an upgrade, high IO on individual storage processors, etc.) that EMC support could not explain.

So, the end result and lesson learned is to always install PowerPath on all non-clustered hosts that connect to your EMC SAN if you want automatic fail-over. If you do have this setup and have a storage processor fail, follow these steps:

1. Connect to Navisphere
2. Click “Virtual Disks”
3. Choose the virtual disk to edit and click its name.
4. Click “Virtual Disk Access Details”

[Screenshot: Navisphere Express virtual disk details]

5. Click “Change Ports for Access”
** On this screen, it will show where the virtual disk is currently assigned (e.g. SP A  or SP B).

That’s it. Hopefully this saves somebody 15 hours of downtime, confused tech support people, and/or a misconfigured DR setup!

Save IMAP Attachments with AE.Net.Mail in C#

One of the people on my team ran into a problem yesterday where a customer could only send data via email. The customer does not support formal EDI or have IT staffers who can support building or using something more automated. After talking about how the processing needed to be performed 7 days a week, we knew we had to build something automated. I went to Google and StackOverflow to search for a free IMAP solution (I so wish IMAP/POP/SFTP were supported in the core framework) and found AE.Net.Mail. The product doesn’t have any documentation, but after looking at 2 posts about issues other people were having with the client, I was able to figure out everything I needed to process my emails and save the attachments out to disk. The project is even supported via NuGet, which is a big plus. All of the code below is based on using AE.Net.Mail v1.6.0.0 with .NET 4.0.

Here is the code I built to process email attachments from 2 different customers:

        // Requires: using AE.Net.Mail; using System.IO;
        // Wrapper method shown for context; the original post gives only the method body.
        static void ProcessInbox()
        {
            // Connect to Exchange 2007 IMAP server (locally), without SSL.
            using (ImapClient ic = new ImapClient("<server address>", "<user account>", "<user password>", ImapClient.AuthMethods.Login, 143, false))
            {
                // Open a mailbox, case-insensitive
                ic.SelectMailbox("INBOX");

                // Get messages based on the flag "Undeleted()".
                Lazy<MailMessage>[] messages = ic.SearchMessages(SearchCondition.Undeleted(), false);

                // Process each message
                foreach (Lazy<MailMessage> message in messages)
                {
                    MailMessage m = message.Value;

                    string sender = m.From.Address;
                    string fileName = string.Empty;

                    // Rules by Sender
                    switch (sender)
                    {
                        case "zachary@customerA.com":
                            fileName = ExtractString(m.Subject, "(", ")");

                            foreach (Attachment attachment in m.Attachments)
                            {
                                attachment.Save(@"C:\Demo\" + fileName + Path.GetExtension(attachment.Filename));
                            }
                            break;

                        case "hunter@customerB.com":

                            foreach (Attachment attachment in m.Attachments)
                            {
                                string filePrefix = attachment.Filename.Substring(0, attachment.Filename.IndexOf('_'));

                                switch (filePrefix)
                                {
                                    case "DAILY":
                                        fileName = "CustomerB_Inventory";
                                        break;
                                    case "PULL":
                                        fileName = "CustomerB_Pulls";
                                        break;
                                    case "RECEIPT":
                                        fileName = "CustomerB_Receipts";
                                        break;
                                }

                                attachment.Save(@"C:\Demo\" + fileName + Path.GetExtension(attachment.Filename));
                            }
                            break;
                    }

                    // Delete Message (so it drops out of criteria and won't be double processed)
                    ic.DeleteMessage(m);
                }
            }
        }

        /// <summary>
        /// Helper method to pull out a string between a start and stop string.
        /// Example:
        ///    string story = "The boy and the cat ran to the store.";
        ///    ExtractString(story, "cat", "to"); // returns " ran " (includes the surrounding spaces)
        /// </summary>
        /// <param name="stringToParse">String to search</param>
        /// <param name="startTag">Starting String Pattern</param>
        /// <param name="endTag">Ending String Pattern</param>
        /// <returns>String found between the Starting and Ending Pattern.</returns>
        static string ExtractString(string stringToParse, string startTag, string endTag)
        {
            int startIndex = stringToParse.IndexOf(startTag) + startTag.Length;
            int endIndex = stringToParse.IndexOf(endTag, startIndex);
            return stringToParse.Substring(startIndex, endIndex - startIndex);
        }

Most of the processing is done in the switch statement, since I’m going to apply different parsing rules by customer. This is far from an elegant and maintainable long-term solution, but it gets the problem solved quickly. Here is the rule summary by customer.

case “zachary@customerA.com”:

Every day I’m going to get 3 emails from this customer; the subject of each email will be:

“Daily Reports for Customer A (CustomerA_Inventory)”
“Daily Reports for Customer A (CustomerA_Receipts)”
“Daily Reports for Customer A (CustomerA_Pulls)”

All three emails have a CSV attachment called CustomerA.csv. The file in each email contains different data, and I need all 3 written to disk for processing in my EDI solution. My solution was to parse the subject using the helper method and rename each attachment with the name in the parentheses. The result of processing the emails with the code above is 3 files written to disk with names that correctly identify their data content.

C:\Demo\CustomerA_Inventory.CSV
C:\Demo\CustomerA_Receipts.CSV
C:\Demo\CustomerA_Pulls.CSV
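The subject-to-filename mapping can be exercised on its own. The snippet below inlines the same ExtractString logic (in a hypothetical `SubjectParser` class) so it is self-contained:

```csharp
using System;

public static class SubjectParser
{
    // Same logic as the ExtractString helper above: return the text found
    // between the first startTag and the endTag that follows it.
    public static string ExtractString(string stringToParse, string startTag, string endTag)
    {
        int startIndex = stringToParse.IndexOf(startTag) + startTag.Length;
        int endIndex = stringToParse.IndexOf(endTag, startIndex);
        return stringToParse.Substring(startIndex, endIndex - startIndex);
    }
}

// ExtractString("Daily Reports for Customer A (CustomerA_Inventory)", "(", ")")
// yields "CustomerA_Inventory", which becomes the file name written to disk.
```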

case “hunter@customerB.com”:

This customer is similar to the one above, but I’m only going to get 1 email with 3 attachments that each have a date appended to the file name. I need to save each file to disk without a date stamp, so my EDI processor can be set up to pull in the same file name every day. My solution was to parse the attachment file name and rename the file based on its prefix. The results of processing the email are below.

Source Email:

Attachment 1: DAILY_INVENTORY_06_05_2012.XLS
Attachment 2: DAILY_RECEIPT_06_05_2012.XLS
Attachment 3: DAILY_PULL_06_05_2012.XLS

Results of Processing

C:\Demo\CustomerB_Inventory.XLS
C:\Demo\CustomerB_Pulls.XLS
C:\Demo\CustomerB_Receipts.XLS

Note: A while back I looked to see if there was a turn-key app/solution that could pull in emails for processing (basically run as a service on a server and provide a nice GUI to allow non-technical people to set up rules for email processing), but I couldn’t find anything that worked with our setup. This would ideally be a better solution, since it would allow users to add new rules for new emails at a moment’s notice, versus the hard-coded logic used above.

ASP.NET 4.0 WCF RESTful Services – Part 2

In the first part, we learned how to set up an ASP.NET 4.0 WCF (file-extension-less) service and the basics of using jQuery’s $.getJSON() method to call a service implementing the GET method. Now we are going to review all the HTTP methods commonly found in a RESTful API (GET, POST, PUT, and DELETE). The code below targets the project created in part 1, which I’ll update in my next post. Each HTTP method/verb is used to describe a process in a CRUD transaction, so I tried to keep things simple by naming accordingly. Everything below was written in Visual Studio 2010 with .NET 4.0.

RESTful HTTP Methods ( GET / POST / PUT / DELETE )

Before we explore the HTTP methods, remember the following rules while building your WCF Services.

  1. The UriTemplate can only use parameters of type string.
  2. URI templates are very flexible, and you can build your own standard syntax for your API.
    Sample URI templates for .svc-less function calls:

            [WebGet(UriTemplate = "Products/{name}")]
            public Product GetProduct(string name)
            // URL = "/InventoryService/Products/ProductXYZ"
    
            [WebGet(UriTemplate = "Products/Product({name})")]
            public Product GetProduct(string name)
            // URL = "/InventoryService/Products/Product(ProductXYZ)"
    
            [WebGet(UriTemplate = "Products/API.Service?Product={name}")]
            public Product GetProduct(string name)
            // URL = "/InventoryService/Products/API.Service?Product=ProductXYZ"
    
  3. When passing multiple objects to a service, you must wrap the objects.
    ** Difference sending 1 object vs 2 objects **

            //  Sending 1 Object (No Wrapping).
            [WebInvoke(UriTemplate = "Products/API.Service?Product={productName}", Method = "POST")]
            public string APIGet(string productName, Product product1)
    
            // Sending 2 Objects (Wrapping)
            // Added "BodyStyle = WebMessageBodyStyle.WrappedRequest".
            [WebInvoke(UriTemplate = "Products/API.Service?Product={productName}", Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest )]
            public string APIGet(string productName, Product product1, Product product2)
    

    jQuery Ajax Call (Sending 1 object vs 2 objects)

                // Common Objects to use in examples below
                var Product1 = {};
                Product1.Name = 'Name1';
                Product1.Description = 'Desc1';
                Product1.Price = 1.0;
                Product1.Quantity = 1;
                Product1.IsDiscontinued = false;
    
                var Product2 = {};
                Product2.Name = 'Name2';
                Product2.Description = 'Desc2';
                Product2.Price = 2.0;
                Product2.Quantity = 2;
                Product2.IsDiscontinued = false;
    
                //  Sending 1 Object (No Wrapping).
                $.ajax({
                    type: 'POST',
                    url: '/InventoryService/Products/API.Service?Product=ProductXYZ',
                    contentType: 'application/json; charset=utf-8',
                    dataType: 'json',
                    data: JSON.stringify(Product1),  // Here we are sending the 1 object, no wrapper.
                    success: function (response)
                    {
                        // Do Something
                    }
                });
    
                // Sending 2 Objects (Wrapping)
                // Here we are creating a new jsonRequest object that wraps our 2 objects that we want to send to the server.
                jsonRequest = { 'product1': Product1, 'product2': Product2 };
    
                $.ajax({
                    type: 'POST',
                    url: '/InventoryService/Products/API.Service?Product=ProductXYZ',
                    contentType: 'application/json; charset=utf-8',
                    dataType: 'json',
                data: JSON.stringify(jsonRequest),  // Here we are sending an object that wraps our 2 product objects.
                    success: function (response)
                    {
                        // Do Something
                    }
                });
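For clarity, here is the exact JSON body the wrapped call produces; with WebMessageBodyStyle.WrappedRequest, each top-level property name must line up with a service parameter name (a sketch reusing the sample objects above):

```javascript
// Build the wrapped request body: the property names ("product1", "product2")
// match the WCF method's parameter names exactly.
var Product1 = { Name: 'Name1', Description: 'Desc1', Price: 1.0, Quantity: 1, IsDiscontinued: false };
var Product2 = { Name: 'Name2', Description: 'Desc2', Price: 2.0, Quantity: 2, IsDiscontinued: false };

var body = JSON.stringify({ product1: Product1, product2: Product2 });
// body now carries both objects nested under their parameter names.
```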
    

GET / POST / PUT / DELETE WCF Services and their jQuery Ajax Calls

GET Method

Service

        [WebGet(UriTemplate = "Products/{name}")]
        [OperationContract]
        public Product GetProduct(string name)
        {
            // Get a Product
        }

jQuery Call

                    var name = 'ProductXYZ';

                    $.ajax({
                        type: 'GET',
                        url: '/InventoryService/Products/' + name,
                        contentType: 'application/json; charset=utf-8',
                        dataType: 'json',
                        data: '',
                        success: function (response)
                        {
                            // Do Something
                        }
                    });

                    // Alternative to $.ajax for a GET request: $.getJSON.
                    $.getJSON('/InventoryService/Products/' + name, '', function (result) { /* Do Something */ });

This is probably the simplest service call to issue, but the amazing jQuery team made the process even easier with the getJSON() call, which wraps up an $.ajax() GET request. Notice the svc-less call above: there’s no ugly “.svc” telling your consumers… Hey, I’m using WCF!!!

POST Method

Service

        [WebInvoke(UriTemplate = "Products/{name}", Method = "POST")]
        [OperationContract]
        public Product AddProduct(string name, Product product)
        {
            // Insert New Product
        }

jQuery Call

                    var name = 'ProductXYZ';
                    var Product = {};
                    Product.Name = 'ProductXYZ';
                    Product.Description = 'Product XYZ Description';

                    $.ajax({
                        type: 'POST',
                        url: '/InventoryService/Products/' + name,
                        contentType: 'application/json; charset=utf-8',
                        dataType: 'json',
                        data: JSON.stringify(Product),
                        success: function (response)
                        {
                            // Do Something
                        }
                    });

This has always been my go-to method for sending data to a service, but according to a few RESTful purists this method is only supposed to be used for inserts. I’m not really sure how “firm” this rule is, so I decided to do all inserts and updates with the POST method, since I almost always handle both in the same function call in my DAL. In addition to posting my object to the service, I also include the primary key (“PK”) in the service path so I can use the IIS logs to show who touched a specific record.
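Since the PK travels in the URL path, it is worth building that URL in one place (and encoding the key); a small illustrative helper, not part of the original project, whose path shape matches the UriTemplate above:

```javascript
// Hypothetical helper: put the record's primary key in the service path so it
// shows up in the IIS request logs; encodeURIComponent keeps odd keys URL-safe.
function buildProductUrl(primaryKey) {
    return '/InventoryService/Products/' + encodeURIComponent(primaryKey);
}
```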

PUT Method

Service

        [WebInvoke(UriTemplate = "Products/{name}", Method = "PUT")]
        [OperationContract]
        public Product UpdateProduct(string name, Product product)
        {
            // Update Product
        }

jQuery Call

                    var name = 'ProductXYZ';
                    var Product = {};
                    Product.Name = 'ProductXYZ';
                    Product.Description = 'Product XYZ Description';

                    $.ajax({
                        type: 'PUT',
                        url: '/InventoryService/Products/' + name,
                        contentType: 'application/json; charset=utf-8',
                        dataType: 'json',
                        data: JSON.stringify(Product),
                        success: function (response)
                        {
                            // Do Something
                        }
                    });

I don’t use this much, but when I do, the calls look exactly the same as a POST except for the change in request type.

DELETE Method

Service

        [WebInvoke(UriTemplate = "Products/{name}", Method = "DELETE")]
        [OperationContract]
        public bool DeleteProduct(string name)
        {
            // Delete Product
        }

jQuery Call

                    var name = 'ProductXYZ';

                    $.ajax({
                        type: 'DELETE',
                        url: '/InventoryService/Products/' + name,
                        contentType: 'application/json; charset=utf-8',
                        dataType: 'json',
                        data: '',
                        success: function (response)
                        {
                            // Do Something
                        }
                    });

Since I always like “confirmation” when something has been done, I always return a boolean from my delete services. This lets me confirm the record was actually deleted. If you expect to have various failure states, you might want to consider using a string return type instead, to provide a detailed message about why the request failed (e.g. “Successfully Deleted”, “You don’t have permission.”, “Product is associated with other orders.”).
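If you do switch to a string return type, the success handler can branch on the message; a minimal sketch (the message strings are just the examples from the text, not a fixed contract):

```javascript
// Interpret a string response from a delete service: one known success
// message, everything else treated as a failure reason.
function describeDeleteResult(response) {
    if (response === 'Successfully Deleted') {
        return 'deleted';
    }
    return 'failed: ' + response;
}
```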

ASP.NET 4.0 WCF RESTful Services – Part 1

Details, details, details… I just spent a few days moving over to svc-less WCF services with an ASP.NET 4.0 web application, and boy was it fun… The overall setup is pretty easy, once you’ve painfully gone through the process “a few” times. Since this is something I’ll now be doing by default on all new projects, I thought it would make a great write-up tutorial. During my discovery and learning phase, I found a bunch of helpful blog posts, but nothing was 100% complete and lots of those “important” bits were missing. Since most of my projects consume JSON, I plan to do a 4-part series on setting up a web application to support WCF services that will be called via jQuery.

Enough with the background, let’s start by creating a new “ASP.NET Web Application”.

What project to choose in VS 2010

  1. Remove everything inside the Scripts folder.
  2. Right Click on Project, and choose “Manage NuGet Packages…”

    Hopefully you’re familiar with NuGet; it’s a package manager for Visual Studio, and you can install it by going to Tools -> Extension Manager… -> Online Gallery -> Download “NuGet Package Manager”. It is the most popular extension, and it lets you quickly add bits to your projects (e.g. install jQuery and be notified when a new version is available).

  3. Use search to install the following NuGet Packages.
    • jQuery
    • JSON-js json2

    Required NuGet Packages

  4. Edit your “site.master” to include your new scripts:
    	   <script src="Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    	   <script src="Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    	   <script src="Scripts/jquery.validate.js" type="text/javascript"></script>
    	
  5. Also in the “site.master”, edit your asp:Menu to include 4 pages called Part1 – Part4:
                    <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" EnableViewState="false" IncludeStyleBlock="false" Orientation="Horizontal">
                        <Items>
                            <asp:MenuItem NavigateUrl="~/Default.aspx" Text="Home"/>
                            <asp:MenuItem NavigateUrl="~/Part1.aspx" Text="Part 1"/>
                            <asp:MenuItem NavigateUrl="~/Part2.aspx" Text="Part 2" />
                            <asp:MenuItem NavigateUrl="~/Part3.aspx" Text="Part 3" />
                            <asp:MenuItem NavigateUrl="~/Part4.aspx" Text="Part 4" />
                            <asp:MenuItem NavigateUrl="~/About.aspx" Text="About" />
                        </Items>
                    </asp:Menu>
    	

    ** We are only going to use Part1.aspx right now, but I plan on 4 posts on this topic.

  6. Add 4 new “Web Form using Master Page” items to the project, called Part1.aspx, Part2.aspx, Part3.aspx, and Part4.aspx.
    ** These will match the names used in the navigation menu in the site.master.
  7. Add a new folder to the root of the project called Service.
  8. Add a new “AJAX-enabled WCF Service” to the Service folder called “InventoryService.svc”.
    • Note, this will add the following references to your project.
      	     System.Runtime.Serialization
      	     System.ServiceModel
      	     System.ServiceModel.Web
      	
    • It will also add the following lines to your Web.config
      	  <system.webServer>
      		 <modules runAllManagedModulesForAllRequests="true"/>
      	  </system.webServer>
      
      	  <system.serviceModel>
      		<behaviors>
      		  <endpointBehaviors>
      			<behavior name="RESTfulWCF.Service.Service1AspNetAjaxBehavior">
      			  <enableWebScript />
      			</behavior>
      		  </endpointBehaviors>
      		</behaviors>
      		<serviceHostingEnvironment aspNetCompatibilityEnabled="true"
      		  multipleSiteBindingsEnabled="true" />
      		<services>
      		  <service name="RESTfulWCF.Service.Service1">
      			<endpoint address="" behaviorConfiguration="RESTfulWCF.Service.Service1AspNetAjaxBehavior"
      			  binding="webHttpBinding" contract="RESTfulWCF.Service.Service1" />
      		  </service>
      		</services>
      	  </system.serviceModel>
      	
    • Change the “system.serviceModel” section of your web.config to the following.
      	    <system.serviceModel>
      	        <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
      	        <standardEndpoints>
      	            <webHttpEndpoint>
      	                <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
      	            </webHttpEndpoint>
      	        </standardEndpoints>
      	    </system.serviceModel>
      	

      Web.Config Key Points

      	// This allows your services to run under your sites app pool, giving access
      	// to your HttpContext.
      	aspNetCompatibilityEnabled="true"
      	
      	// See below, this allows you to type <service>/help to get information on your service	
      	helpEnabled="true"
      	
      	// This is AWESOME, this tag allows your service to respond in any format you 
      	// specify in your request (e.g. XML, JSON, etc...).
      	automaticFormatSelectionEnabled="true"
      	

      ** One last item to note: every time you add an additional AJAX service, it will edit your web.config and put back in the bad configuration. I strongly suggest you make a backup of your web.config, in case you run into problems in the future!!!
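Because automaticFormatSelectionEnabled keys off the request’s Accept header, the client gets to pick the response format; a tiny illustrative helper for choosing that header (not from the original post):

```javascript
// Map a short format name to the Accept header that WCF's automatic format
// selection responds to; anything unrecognized falls back to JSON.
function acceptHeaderFor(format) {
    var map = { json: 'application/json', xml: 'application/xml' };
    return { Accept: map[format] || 'application/json' };
}
```

Usage with jQuery would look like `$.ajax({ url: '/InventoryService/Part1', headers: acceptHeaderFor('xml') })`.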

    • Manually add a project reference to System.ServiceModel.Activation.
  9. At this point, your project in solution explorer should look like this:

    Items Included in Solution Explorer

  10. Now, open InventoryService.svc and make the following changes:
    • Erase everything and add the following C# code:
      using System.ServiceModel;
      using System.ServiceModel.Activation;
      using System.ServiceModel.Web;
      
      namespace RESTfulWCF.Service
      {
          [ServiceContract(Namespace = "")]
          [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
          public class InventoryService
          {
              [WebGet(UriTemplate = "Part1")]
              [OperationContract]
              public string Part1GetRequest()
              {
                  return "I did work";
              }
          }
      }
      	

      In the code above, we are mapping the function “Part1GetRequest” to the name “Part1”. This will allow us to call the service with the following syntax “/InventoryService/Part1” using a GET request.

  11. Add a route to call the service without referencing a “.svc” file.
    Open Global.asax and replace your Application_Start with the following.

            void Application_Start(object sender, EventArgs e)
            {
                // Code that runs on application startup
                RouteTable.Routes.Add(new ServiceRoute("InventoryService", new WebServiceHostFactory(), typeof(Service.InventoryService)));
            }
    	
  12. Now we are ready to create a jQuery test call; open “Part1.aspx” in Source View:

    Erase everything and add the following HTML code

    <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Part1.aspx.cs" Inherits="RESTfulWCF.Part1" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server">
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <script type="text/javascript">
            $(document).ready(function ()
            {
                $('#doWork').click(function ()
                {
                    $.getJSON('/InventoryService/Part1', '', function (result) { alert(result); });
                });
            });
        </script>
        <h2>
            ASP.NET 4.0 WCF RESTful Demo - Part 1
        </h2>
        <p>
            <input type="button" id="doWork" value="Issue GET Request" />
        </p>
    </asp:Content>
    	

    We are able to use the $.getJSON() jQuery function because we applied the “WebGet” attribute to the WCF function “Part1GetRequest()”.

  13. Launch the application and navigate to “Part 1”. Click the “Issue GET Request” button and you should see the following results:

    jQuery Demo Results

  14. To get a list of all the functions your service offers, append “/help” to the service URL.

    List of all WCF Service Accessible by Passing /help to the Service

As noted, here is the project I built above following all the steps one-by-one.
    >> ASP.NET 4.0 RESTful Web Application Project

Making an Editable Table (CRUD) with jQuery

I’ve been working on a form that needs to support an unknown number of rows, which the user will either enter one-by-one or bulk upload. Adding a few rows to a table is pretty easy in HTML, but as the rows increase the page starts to slow down and the UI can become overwhelming with HTML input controls. I decided to test a few concepts to determine what would best fit my form: I started with rows of input controls, then tried the jqGrid plug-in. I really liked jqGrid, but as I tried to get everything working I found myself having to “work around” lots of issues (read forum, apply fix, something else breaks, repeat), so I gave up after getting the solution working 90% in jqGrid because the code was already a lot more complex than what I wanted. In the end, I decided to build my own table editor that supports CRUD with jQuery. In addition to following the KISS rule, I also had a list of goals I wanted to include.

Solution Goals

  1. Allow users to add 1 to 2,000 rows
  2. Keep the page quick when working with 500+ rows
  3. Make all edits in memory
  4. Create an Undo/Cancel button to undo an edit
  5. Capture dynamic HTML TABLE row contents for use in a server side postback
  6. (Not in DEMO) Enable validation for rows in Edit Mode
  7. (Not in DEMO) Enable default values for manually added rows

All of the goals above were in the final solution, and 95% of the work is done client-side in ~300 lines of jQuery & JavaScript code. I chose jQuery templates for the rows, since they offer a simple model for merging data and HTML along with some advanced features for controlling how elements are rendered (e.g. if my Cross object has a Status set, it will display an alert icon on the row and notify the user something is wrong). Since most of these other features were case specific, I left them out of the demo to focus on doing basic CRUD in HTML and on how I got the dynamic rows back to ASP.NET.

Final Product

My solution was designed to leverage ASP.NET, but all of the code below is 100% HTML.  You can take this code and apply it to any table and choose to leverage any server technology you want.  Part of step 5 is ASP.NET specific, but this shows a neat trick for getting the HTML table rows back to the server so you can access them in a traditional ASP.NET postback event.

Step 1: Prerequisites (Accessible via CDN)

  1. jQuery
  2. jQuery tmpl plug-in
  3. JSON.org library
    <!-- jQuery on GOOGLE CDN -->
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
    <!-- JSON.org on CDNJS CDN -->
    <script type="text/javascript" src="http://ajax.cdnjs.com/ajax/libs/json2/20110223/json2.js"></script>
    <!-- jQuery tmpl() plug-in on ASPNET CDN -->
    <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script>

Step 2: HTML Layout

<body>
    <h1>CRUD My Table</h1>
    <!-- Table where we will perform CRUD operations, data loaded via jQuery tmpl() -->
    <table id="CRUDthisTable" class="mediumTable">
        <thead>
            <tr class="rowHeader">
                <th></th>
                <th>Change Type </th>
                <th>Update Type </th>
                <th>Customer Part </th>
                <th>ROHM Part </th>
                <th>Rank Start </th>
                <th>Rank End </th>
                <th>Price </th>
                <th>UOM </th>
                <th>Apply Date </th>
                <th>Remarks </th>
            </tr>
        </thead>
        <tbody>
        </tbody>
    </table>
    <!-- jQuery tmpl() Templates -->
    <!-- Edit Rows -->
    <script id="editRowTemplate" type="text/x-jquery-tmpl">
        <tr class="editRow">
            <td>
                <span id="cancel" name="cancel" tooltip="Cancel" class="ui-icon ui-icon-close CancelRow">Cancel</span>
                <span id="save" name="save" tooltip="Save" class="ui-icon ui-icon-disk SaveRow">Save</span>
            </td>
            <td>
                <select id="field1" name="field1" class="changeType">
                    <option></option>
                    <option>All</option>
                    <option>Part</option>
                    <option>Price</option>
                </select></td>
            <td>
                <select id="field2" name="field2" class="updateType">
                    <option></option>
                    <option>Add</option>
                    <option>Update</option>
                    <option>Delete</option>
                </select></td>
            <td>
                <input type="text" id="field3" name="field3" class="customerPart required part" value="${CustomerPart}" /></td>
            <td>
                <input type="text" id="field4" name="field4" class="rohmPart validROHMpart part" value="${ROHMPart}" /></td>
            <td>
                <input type="text" id="field5" name="field5" class="rankStart rank" value="${RankStart}" /></td>
            <td>
                <input type="text" id="field6" name="field6" class="rankEnd rank" value="${RankEnd}" /></td>
            <td>
                <input type="text" id="field7" name="field7" class="price required number" value="${Price}" /></td>
            <td>
                <select  id="field8" name="field8" class="uomType required">
                    <option></option>
                    <option>1</option>
                    <option>1000</option>
                </select></td>
            <td>
                <input type="text" id="field9" name="field9" class="applyDate required date" value="${ApplyDate}" /></td>
            <td>
                <input type="text" id="field10" name="field10" class="remarks" value="${Remarks}" /></td>
        </tr>
    </script>
    <!-- View Rows -->
    <script id="viewRowTemplate" type="text/x-jquery-tmpl">
        <tr>
            <td style="width:50px;">
                <span id="edit" name="edit" title="Edit" class="ui-icon ui-icon-pencil EditRow">Edit</span>
                <span id="delete" name="delete" title="Delete" class="ui-icon ui-icon-trash DeleteRow">Delete</span>
            </td>
            <td style="width:120px;">${ChangeType}</td>
            <td style="width:120px;">${UpdateType}</td>
            <td>${CustomerPart}</td>
            <td>${ROHMPart}</td>
            <td style="width:45px;">${RankStart}</td>
            <td style="width:45px;">${RankEnd}</td>
            <td>${Price}</td>
            <td style="width:64px;">${UOM}</td>
            <!-- **** TIP: Here we use a function to format the date mm/dd/yyyy -->
            <td style="width:80px;">${FormatDate(ApplyDate)}</td>
            <td>${Remarks}</td>
        </tr>
    </script>
</body>

Step 3: Example Loading Data ( NO CRUD Functionality )

        // Helper Function to Format Date in View Row
        function FormatDate(date)
        {
            return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear();
        }
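A note on FormatDate: getMonth() is zero-based, hence the +1. If the back end ever wants fixed-width dates, a zero-padded variant is easy (an assumption on my part, not a requirement from the demo):

```javascript
// Zero-padded mm/dd/yyyy variant of the FormatDate helper above.
function FormatDatePadded(date) {
    function pad(n) { return (n < 10 ? '0' : '') + n; }
    return pad(date.getMonth() + 1) + "/" + pad(date.getDate()) + "/" + date.getFullYear();
}
```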

        // After the DOM has loaded, take the sample data and inject it into the table using the View Row template.
        $(document).ready(function ()
        {
            // Sample Data - Could be returned via AJAX or could be manual rows added to the TABLE
            var crosses = [
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "1SS355TE-17", "ROHMPart": "1SS355TE-17", "RankStart": "", "RankEnd": "", "Price": 0.0151, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RB160M-60TR", "ROHMPart": "RB160M-60TR", "RankStart": "", "RankEnd": "", "Price": 0.0605, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RR264M-400TR", "ROHMPart": "RR264M-400TR", "RankStart": "", "RankEnd": "", "Price": 0.031, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "1SR154-400TE25", "ROHMPart": "1SR154-400TE25", "RankStart": "", "RankEnd": "", "Price": 0.0309, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RF071M2STR", "ROHMPart": "RF071M2STR", "RankStart": "", "RankEnd": "", "Price": 0.0638, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null}];

            if (crosses) {
                $("#viewRowTemplate").tmpl(crosses).appendTo("#CRUDthisTable");
            }
        });

CRUD Table – Data Loaded no CRUD Functions Activated

Demo of 100% CRUD Table in HTML (no persistence or CRUD features activated)

** As you can see, I’m not using image links similar to the final product, since I was aiming for simplicity.  If you want to use images, I suggest the jQuery UI icons as I used in the final product; they can easily be added to a span by adding two class values (e.g. class=”ui-icon ui-icon-close”).

Step 4: Enable CRUD

        // Global Parameters
        var rowNum = 1;
        var rowRemovedNum;
        var rowRemovedContents;

        // Read a row in Edit Mode into a Cross Object
        function GetEditRowObject()
        {
            var row = $('#CRUDthisTable tbody tr.editRow');

            var cross = {};

            cross.ChangeType = row.find('.changeType').val();
            cross.UpdateType = row.find('.updateType').val();
            cross.CustomerPart = row.find('.customerPart').val();
            cross.ROHMPart = row.find('.rohmPart').val();
            cross.RankStart = row.find('.rankStart').val();
            cross.RankEnd = row.find('.rankEnd').val();
            cross.Price = row.find('.price').val();
            cross.UOM = row.find('.uomType').val();
            var dateString = row.find('.applyDate').val();
            cross.ApplyDate = new Date(dateString);
            cross.Remarks = row.find('.remarks').val();

            return cross;
        }

        // Read a row in View Mode into a Cross Object
        function GetViewRowObject(rowNum)
        {
            var row = $('#CRUDthisTable tbody tr').eq(rowNum);

            var cross = {};

            cross.ChangeType = row.find('td:eq(1)').text();
            cross.UpdateType = row.find('td:eq(2)').text();
            cross.CustomerPart = row.find('td:eq(3)').text();
            cross.ROHMPart = row.find('td:eq(4)').text();
            cross.RankStart = row.find('td:eq(5)').text();
            cross.RankEnd = row.find('td:eq(6)').text();
            cross.Price = row.find('td:eq(7)').text();
            cross.UOM = row.find('td:eq(8)').text();
            cross.ApplyDate = row.find('td:eq(9)').text();
            cross.Remarks = row.find('td:eq(10)').text();

            return cross;
        }

        // Read all rows into Cross Object Array
        function GetAllViewRowsAsCrossObjects()
        {
            var crossTableRows = [];

            $('#CRUDthisTable tbody tr').each(function (index, value)
            {
                var row = GetViewRowObject(index);
                crossTableRows.push(row);
            });

            return crossTableRows;
        }

        // Check if any rows are in Edit Mode
        function IsExistingRowInEditMode()
        {
            var rowsInEditMode = $('#CRUDthisTable tbody tr.editRow').length;

            if (rowsInEditMode > 0) {
                alert('You have a row in Edit mode, please save or cancel the row changes before you continue.');
                return true;
            }

            return false;
        }

        // After the DOM has loaded, bind the CRUD events
$(document).ready(function ()
{
// Sample Data - Could be returned via AJAX or could be manual rows added to the TABLE
var crosses = [
{ "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "1SS355TE-17", "ROHMPart": "1SS355TE-17", "RankStart": "", "RankEnd": "", "Price": 0.0151, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
{ "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RB160M-60TR", "ROHMPart": "RB160M-60TR", "RankStart": "", "RankEnd": "", "Price": 0.0605, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
{ "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RR264M-400TR", "ROHMPart": "RR264M-400TR", "RankStart": "", "RankEnd": "", "Price": 0.031, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
{ "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "1SR154-400TE25", "ROHMPart": "1SR154-400TE25", "RankStart": "", "RankEnd": "", "Price": 0.0309, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
{ "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RF071M2STR", "ROHMPart": "RF071M2STR", "RankStart": "", "RankEnd": "", "Price": 0.0638, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null}];

if (crosses) {
$("#viewRowTemplate").tmpl(crosses).appendTo("#CRUDthisTable");
}

// Events
$('.AddRow').click(function()
{
if (IsExistingRowInEditMode())
return;

rowRemovedNum = 0;

var data = { data: 1 };
var output = $("#editRowTemplate").tmpl(data).html()

$('#CRUDthisTable tbody').prepend('<tr>' + output + '</tr>');
var $rowEdit = $('#CRUDthisTable tbody tr.editRow');

$('#CRUDthisTable tbody tr:first')[0].scrollIntoView();
});

$('.EditRow').live('click', function(e)
{
if (IsExistingRowInEditMode())
return;

var row = $(this).parent().parent().parent().children().index($(this).parent().parent());

var data = GetViewRowObject(row);

var output = $("#editRowTemplate").tmpl(data).html()

rowRemovedNum = row;
rowRemovedContents = $('#CRUDthisTable tbody tr').eq(row).html();

$('#CRUDthisTable tbody tr').eq(row).after('<tr>' + output + '</tr>');

var changeTypeDefualt = $('#ChangeTypeDefualt').val();
var updateTypeDefault = $('#UpdateTypeDefault').val();
var uomDefault = $('#UOMDefault').val();
var applyDateDefault = $('#ApplyDateDefault').val();

var $editRow = $('#CRUDthisTable tbody tr.editRow');

var changeType = $editRow.find('.changeType');
$(changeType).val(data.ChangeType);

var updateType = $editRow.find('.updateType');
$(updateType).val(data.UpdateType);

var uomType = $editRow.find('.uomType');
$(uomType).val(data.UOM);

$('#CRUDthisTable tbody tr').eq(row).remove();
});

$('.SaveRow').live('click', function(e)
{
// Good place to add validation, don't allow save until the row has valid data!
// var isValid = ValidateNestedControls("#CRUDthisTable");
// if (!isValid)
//     return;

var savedData = GetEditRowObject();

var row = $(this).parent().parent().parent().children().index($(this).parent().parent());

var output = $("#viewRowTemplate").tmpl(savedData).html();

var tableRows = $('#CRUDthisTable tbody tr').length;

if (tableRows == 0 || row == 0) {
$('#CRUDthisTable tbody').prepend('<tr>' + output + '</tr>');
}
else {
$('#CRUDthisTable tbody tr').eq(row - 1).after('<tr>' + output + '</tr>');
}

$('#CRUDthisTable tbody tr').eq(row + 1).remove();
});

$('.CancelRow').live('click', function(e)
{
var row = $(this).parent().parent().parent().children().index($(this).parent().parent());

$('#CRUDthisTable tbody tr').eq(row).remove();

var tableRows = $('#CRUDthisTable tbody tr').length;

if (rowRemovedContents) {
if (tableRows == 0 || row == 0) {
$('#CRUDthisTable tbody').prepend('<tr>' + rowRemovedContents + '</tr>');
}
else {
$('#CRUDthisTable tbody tr').eq(row - 1).after('<tr>' + rowRemovedContents + '</tr>');
}
}

rowRemovedContents = null;
});

$('.DeleteRow').live('click', function(e)
{
e.preventDefault();
$(this).parent().parent().remove();
});
});

The table shows the results of clicking “Add New Row” in the table footer.

Step 5: Ajax POST Table Contents to the Server (before button event)

There are a ton of ways to do this, but my goal was to allow users to edit the table and, when they were done with all their edits, hit “Save” to write everything to the DB. Since ASP.NET doesn’t give you server-side access to dynamically added table rows, I bound an AJAX post event to the “Save” button that sends the table contents to the server, stores them in cache, and then uses the cache in the traditional postback “Save” event.

        // After the DOM has loaded, bind the ASP.NET save button
        $(document).ready(function () {
            $('#<%= btnSave.ClientID %>').click(function (e) {
                return PostTable();
            });
        });

        // Post all rows to the server and put into Cache
        function PostTable()
        {
            // Normally I'll get the ID from the QueryString, but it could also be grabbed from a hidden element in the form.
            var crossId = 1;
            var jsonRequest = { crosses: GetAllViewRowsAsCrossObjects(), crossId: crossId };

            $.ajax({
                type: 'POST',
                url: 'Demo.aspx/CacheTable',
                data: JSON.stringify(jsonRequest),
                contentType: 'application/json; charset=utf-8',
                dataType: 'json',
                success: function (data, text)
                {
                    return true;
                },
                error: function (request, status, error)
                {
                    return false;
                }
            });
        }
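The `GetAllViewRowsAsCrossObjects()` helper referenced above isn’t shown in this post; a minimal sketch of how the view rows might be collected into objects matching the `Cross` DTO could look like the following. The column order and the subset of fields are assumptions for illustration, not the actual implementation.

```javascript
// Map an array of cell values (in assumed column order) onto an object
// shaped like the Cross DTO on the server. Only a few fields are shown.
function cellsToCross(cells) {
    return {
        ChangeType: cells[0],
        UpdateType: cells[1],
        CustomerPart: cells[2],
        ROHMPart: cells[3],
        Price: parseFloat(cells[4]) || 0,
        UOM: parseInt(cells[5], 10) || 0
    };
}

// jQuery wrapper: walk every table row and collect its cell text.
function GetAllViewRowsAsCrossObjects() {
    var crosses = [];
    $('#CRUDthisTable tbody tr').each(function () {
        var cells = $(this).children('td').map(function () {
            return $(this).text();
        }).get();
        crosses.push(cellsToCross(cells));
    });
    return crosses;
}
```

Keeping the cell-to-object mapping in its own function makes it easy to adjust when the template columns change.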

Important Note: If you want to access a page method via jQuery $.ajax(), you must make the method static, decorate it with [WebMethod], and pass parameters whose names match the method signature (case sensitive) with the expected data types in the ajax call.

    public partial class Demo : System.Web.UI.Page
    {
        [WebMethod]
        public static void CacheTable(List<Cross> crosses, int crossId)
        {
            // Build the key per request. A static field initialized with
            // HttpContext.Current.User would be evaluated only once per
            // AppDomain, for whichever user happened to hit the page first.
            string cacheKey = "CacheTable_" + HttpContext.Current.User.Identity.Name;

            if (crosses != null && crosses.Count > 0)
            {
                HttpContext.Current.Cache.Remove(cacheKey);
                HttpContext.Current.Cache.Insert(cacheKey, crosses, null, DateTime.Now.AddSeconds(3600), Cache.NoSlidingExpiration);
            }
        }
    }

    // Custom Data Transfer Object (DTO)
    public class Cross
    {
        public string ChangeType { get; set; }
        public string UpdateType { get; set; }
        public string CustomerPart { get; set; }
        public string ROHMPart { get; set; }
        public string RankStart { get; set; }
        public string RankEnd { get; set; }
        public double Price { get; set; }
        public int UOM { get; set; }
        public DateTime ApplyDate { get; set; }
        public string Remarks { get; set; }
        public string Status { get; set; }
        public string StatusNote { get; set; }
    }

Working Demo of using jQuery to allow CRUD edits to an HTML TABLE.

ASP.NET Note ** If you run into issues with the number of rows you can post back to the server in ASP.NET via AJAX & JSON, you’ll need to increase the “maxJsonLength” in your web.config.

    <system.web.extensions>
        <scripting>
            <webServices>
                <jsonSerialization maxJsonLength="2097152"/>
            </webServices>
        </scripting>
    </system.web.extensions>

Password Generator and Callsign Spelling with C#

If you need help building a random password, you should check out pctools.com’s random password generator. It’s great: it gives you a lot of options, and it can generate a list of 50 passwords with their callsign spellings in seconds. I’ve found the callsign spelling very helpful for remembering and recognizing all the characters in a new password. I liked this solution so much that I decided to port the concept over to C# with a set of helpers. It can be used anywhere you need to generate a password; I am currently using it in an ASP.NET LOB app to suggest and show better password options.

Generating Random Passwords

using System;
using System.Collections.Generic;
using System.Linq;

public static class StringHelper
{
    // Shared Static Random Generator
    private static readonly Random CommonRandom = new Random();

    public static string GenerateRandomPassword(int passwordLength, bool canRepeatCharacters = false)
    {
        char[] chars = "$%#@!*abcdefghijklmnopqrstuvwxyz1234567890?;:ABCDEFGHIJKLMNOPQRSTUVWXYZ^&".ToCharArray();

        string randomPassword = string.Empty;

        for (int l = 0; l < passwordLength; l++)
        {
            // Use a 0 lower bound so chars[0] ('$') can actually be selected;
            // Next's upper bound is already exclusive.
            int x = CommonRandom.Next(0, chars.Length);

            if (canRepeatCharacters || !randomPassword.ToCharArray().Any(ch => ch == chars[x]))
                randomPassword += chars[x].ToString();
            else
                l--;
        }

        return randomPassword;
    }

    public static List<string> GenerateRandomPasswords(int quantity, int passwordLength = 8)
    {
        List<string> passwords = new List<string>();

        for (int i = 0; i < quantity; i++)
        {
            passwords.Add(GenerateRandomPassword(passwordLength));
        }

        return passwords;
    }
}

There are a few options on the generator, like preventing repeated characters and configuring the password length. In addition to the main method, there is a helper that generates multiple passwords and returns them in a list, so you can give your users a few options to choose from.

Important note: Random() is not very random when you’re making quick consecutive calls. Each new Random() instance is seeded from the system clock, so instances created in rapid succession get the same seed and produce identical sequences. If you want to call Random() in a loop, move the instance outside the loop (like the static CommonRandom above) to prevent duplicate seeds, which will result in duplicate passwords.

    public static string GetCallsignSpelling(string password)
    {
        if (string.IsNullOrEmpty(password))
            return string.Empty;

        Dictionary<char, string> callsigns = new Dictionary<char, string>()
            {
                {'$',"Dollar Symbol"},
                {'%',"Percent Symbol"},
                {'#',"Number Symbol"},
                {'@',"At Symbol"},
                {'!',"Exclamation Symbol"},
                {'*',"Asterisk Symbol"},
                {'a',"alpha"},
                {'b',"bravo"},
                {'c',"charlie"},
                {'d',"delta"},
                {'e',"echo"},
                {'f',"foxtrot"},
                {'g',"golf"},
                {'h',"hotel"},
                {'i',"india"},
                {'j',"juliet"},
                {'k',"kilo"},
                {'l',"lima"},
                {'m',"mike"},
                {'n',"november"},
                {'o',"oscar"},
                {'p',"papa"},
                {'q',"quebec"},
                {'r',"romeo"},
                {'s',"sierra"},
                {'t',"tango"},
                {'u',"uniform"},
                {'v',"victor"},
                {'w',"whiskey"},
                {'x',"xray"},
                {'y',"yankee"},
                {'z',"zulu"},
                {'1',"One"},
                {'2',"Two"},
                {'3',"Three"},
                {'4',"Four"},
                {'5',"Five"},
                {'6',"Six"},
                {'7',"Seven"},
                {'8',"Eight"},
                {'9',"Nine"},
                {'0',"Zero"},
                {'?',"Question Symbol"},
                {';',"SemiColon Symbol"},
                {':',"Colon Symbol"},
                {'A',"ALPHA"},
                {'B',"BRAVO"},
                {'C',"CHARLIE"},
                {'D',"DELTA"},
                {'E',"ECHO"},
                {'F',"FOXTROT"},
                {'G',"GOLF"},
                {'H',"HOTEL"},
                {'I',"INDIA"},
                {'J',"JULIET"},
                {'K',"KILO"},
                {'L',"LIMA"},
                {'M',"MIKE"},
                {'N',"NOVEMBER"},
                {'O',"OSCAR"},
                {'P',"PAPA"},
                {'Q',"QUEBEC"},
                {'R',"ROMEO"},
                {'S',"SIERRA"},
                {'T',"TANGO"},
                {'U',"UNIFORM"},
                {'V',"VICTOR"},
                {'W',"WHISKEY"},
                {'X',"XRAY"},
                {'Y',"YANKEE"},
                {'Z',"ZULU"},
                {'^',"Caret Symbol"},
                {'&',"Ampersand Symbol"}
            };

        char[] wordCharacters = password.ToCharArray();

        string callsignSpelling =
            wordCharacters.Aggregate(string.Empty,
                                         (current, passwordCharacter) =>
                                         current + (callsigns[passwordCharacter] + " - ")).TrimEnd(' ', '-');

        return callsignSpelling;
    }

The spelling is done using a Key/Value dictionary, looking up each character of the password one-by-one and joining the results.
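The same lookup-and-join technique is easy to sketch in JavaScript as well (a hypothetical port, trimmed to a handful of characters; the full C# version above covers the entire password character set):

```javascript
// Abbreviated callsign map for illustration only.
var callsigns = {
    'a': 'alpha', 'b': 'bravo', 'c': 'charlie',
    'A': 'ALPHA', 'B': 'BRAVO', 'C': 'CHARLIE',
    '1': 'One', '2': 'Two', '3': 'Three',
    '$': 'Dollar Symbol', '!': 'Exclamation Symbol'
};

// Spell a password one character at a time, joining with " - ".
// Unknown characters fall back to the character itself.
function getCallsignSpelling(password) {
    return password.split('').map(function (ch) {
        return callsigns[ch] || ch;
    }).join(' - ');
}
```

Handy if you ever want to show the callsign spelling client-side without a round trip to the server.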

The result of using these two helpers is below.

JSON to HTML Form Using jQuery dForm Plug-in

In a previous post, I showed how to go from a JSON array of objects/values to an HTML table. This is great when you want to display a bunch of data in columns and rows, but what happens if you want to interact with the data? No problem: there is the jQuery dForm plug-in for that. In order to generate the form, you’ll need to redesign your server-side / inline objects to provide the required rendering data for dForm. It’s pretty straightforward, and there are a ton of options with the plug-in; see here.

Demo Screenshot

In the image above, we use an inline object to feed the dForm “buildForm()” method. I’m using a static value here, but you could just as easily set up dForm to load the data remotely using ajax.
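As a rough idea of what such an inline object looks like, here is an illustrative dForm-style definition: a plain object describing the form’s elements. The field names and element set below are made up for this sketch, not taken from the demo.

```javascript
// Illustrative form definition object (names are hypothetical).
// Each entry in "elements" describes one form control to render.
var formDefinition = {
    "action": "Demo.aspx",
    "method": "post",
    "elements": [
        { "type": "text", "name": "customerPart", "caption": "Customer Part" },
        {
            "type": "select", "name": "uom", "caption": "UOM",
            "options": { "1": "Each", "1000": "Per 1,000" }
        },
        { "type": "submit", "value": "Save" }
    ]
};

// With the dForm plug-in loaded, render the form into a container:
// $('#myForm').buildForm(formDefinition);
```

Because the definition is just data, swapping the static object for an ajax-loaded one is a one-line change.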

Here is the jQuery dForm 0.1.3 Plug-in Demo used in the screenshot above, which includes everything you need to get up and running fast. If you have problems making dForm work, make sure you have included “ALL” the dForm libraries. The dForm.js file has dependencies on the other libraries, if you don’t load the correct libraries, you’ll end up with a blank form.

Reusable ASP.NET MessageBox UserControl

All systems need a way to share status messages with users; without feedback, they don’t know what has happened or what will happen. There are many different ways to send an alert (modal window, JavaScript alert, inline HTML, etc…); regardless of which you use, it always helps to be consistent. I built this control based on a few different ideas/examples a long time ago, and I seem to find more uses for it all the time. You can call it from the server or from the client using JavaScript, making it a perfect single “notification” solution.

Here is an example of what it looks like.

ASP.NET MessageBox User Control Screenshot

** The text after each heading is the code that was used to trigger the message box.

The control is pretty straightforward: on the server it works by “re-injecting” the rendered control when you call the Show function. Since the control does not use viewstate, every time you post back to the server and call Show again, the last message disappears and the new message is displayed. If you want the message to disappear automatically after a few seconds, you can set the timeout in milliseconds. On the client (via JavaScript), you can create a simple function that provides access to throwing alerts without a postback.

Client Side Example

    <script type="text/javascript">
        function ShowSuccess(message) {
            var $alert = $('#MBWrapper1');

            $alert.removeClass().addClass('MessageBoxInterface');
            $alert.children('p').remove();
            $alert.append('<p>' + message + '</p>').addClass('successMsg').show().delay(8000).slideUp(300);
        }
    </script>

Server Side Example

    public partial class _Default : System.Web.UI.Page
    {
        protected void Success_Click(object sender, EventArgs e)
        {
            MessageBox1.ShowSuccess("Success, page processed.", 5000);
        }
    }

ASP.NET MessageBox User Control – Full Working Demo

Download and try the demo. If you like what you see, here is what you need to include to make the control work in your project.

  • MessageBox User Control (ASCX & CS)
  • jQuery
  • IMessageBox.css stylesheet
  • All the graphics from the Images folder

** Note: If you move the images into a different directory, you’ll need to update the CSS to use the correct path to the images.