Reusable ASP.NET MessageBox UserControl

All systems need a way to share status messages with the user; without feedback, they don’t know what has happened or what is about to happen. There are many different ways to send an alert (modal window, JavaScript alert, inline HTML, etc…), but regardless of which you use, it always helps to be consistent. I built this control based on a few different ideas/examples a long time ago, and I seem to find more uses for it all the time. You can call it from the server or from the client using JavaScript, making it a perfect single “notification” solution.

Here is an example of what it looks like.

ASP.NET MessageBox User Control Screenshot

** The text after the heading is the code that was used to trigger the message box.

The control is pretty straightforward: on the server, it works by “re-injecting” the rendered control when you call the Show function. Since the control does not use viewstate, every time you post back to the server and call Show again, the last message disappears and the new message is displayed. If you want the message to disappear automatically after a few seconds, you can set the timeout in milliseconds. On the client (via JavaScript), you can create a simple function that lets you throw alerts without a postback.

Client Side Example

    <script type="text/javascript">
        function ShowSuccess(message) {
            $alert = $('#MBWrapper1');

            $alert.removeClass().addClass('MessageBoxInterface');
            $alert.children('p').remove();
            $alert.append('<p>' + message + '</p>').addClass('successMsg').show().delay(8000).slideUp(300);
        }
    </script>

Server Side Example

    public partial class _Default : System.Web.UI.Page
    {
        protected void Success_Click(object sender, EventArgs e)
        {
            MessageBox1.ShowSuccess("Success, page processed.", 5000);
        }
    }
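
The control’s internals aren’t shown here, but conceptually the server-side Show methods just set the message text and CSS class and, when a timeout is supplied, register a small script to hide the box. Below is a rough sketch of what such a method could look like; it is not the actual control code, and MessageLabel/MessageWrapper are assumed names for controls declared in the ASCX markup.

    public partial class MessageBox : System.Web.UI.UserControl
    {
        // Sketch only -- the real control's implementation may differ.
        public void ShowSuccess(string message, int timeoutMilliseconds)
        {
            MessageLabel.Text = message;                                            // assumed Label in the ASCX
            MessageWrapper.Attributes["class"] = "MessageBoxInterface successMsg";  // assumed wrapper Panel/div
            MessageWrapper.Visible = true;

            // Optional auto-hide: register a startup script that slides the box away.
            if (timeoutMilliseconds > 0)
            {
                string script = string.Format("$('#{0}').delay({1}).slideUp(300);",
                    MessageWrapper.ClientID, timeoutMilliseconds);

                Page.ClientScript.RegisterStartupScript(GetType(), "HideMessageBox", script, true);
            }
        }
    }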

ASP.NET MessageBox User Control – Full Working Demo

Download and try the demo; if you like what you see, here is what you need to include to make the control work in your project.

  • MessageBox User Control (ASCX & CS)
  • jQuery
  • IMessageBox.css stylesheet
  • All the graphics from the Images folder

** Note: If you move the images into a different directory, you’ll need to update the CSS to use the correct path to the images.

ASP.NET Lists Elements and Client Side Updates

A common issue I see people run into with Web Forms is that client-side changes to LIST elements (ListBox, DropDownList, etc…) are not passed back during POSTBACK. This is due to how the viewstate is assigned/processed, and there is no simple “hack” to bypass it. For other non-list elements that typically implement a Text property, this isn’t an issue. Here are 3 common ways to get around the limitation and pick up the changes made to your LIST elements on the client using jQuery/JavaScript.

1. Create a hidden field (that has a text property)

<select size="5" id="users">
    <option value="Kelly">Kelly</option>
    <option value="Kylee">Kylee</option>
    <option value="Nathan">Nathan</option>
</select>

<input type="hidden" id="userList" runat="server" value="Kelly,Kylee,Nathan"/>

Here is how you would add/parse the client side change.

var $users = $('#users');
var $userList = $('#userList');

$users.append($('<option></option>').val('New Entry Value').html('New Entry Text'));

var newUserList = '';
var $options = $users.children('option');

$.each($options, function(index, value) {
	newUserList = newUserList + $options[index].value + ',';
});

$userList.val(newUserList.substr(0, newUserList.length - 1));

** In this example, you maintain a hidden input element that matches the items in the select list. When you change the select list in JavaScript, you change the value of the hidden element. During post back, you read and parse this list to make your updates.
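
On the server side (not shown in the original sample), reading the hidden field back after postback is just a string split. A minimal code-behind sketch, assuming the markup above:

protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack)
    {
        // userList is the runat="server" hidden input kept in sync by the jQuery code above.
        string[] users = userList.Value.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

        foreach (string user in users)
        {
            // Apply your updates here (e.g. save to the database or rebuild a ListBox).
        }
    }
}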

2. Wrap your lists in an update panel

<asp:UpdatePanel ID="UpdateRoles" runat="server">
    <ContentTemplate>
        <table>
            <tr>
                <td style="height: 180px; vertical-align: top;">
                    Active Roles<br />
                    <asp:ListBox ID="lstAdd" runat="server" Width="300px" OnSelectedIndexChanged="lstAdd_SelectedIndexChanged" Rows="10"></asp:ListBox>
                </td>
                <td style="height: 180px">
                    <asp:Button ID="AddRole" runat="server" Text="->" OnClick="AddRole_Click1" />
                    <br />
                    <asp:Button ID="RemoveRole" runat="server" Text="<-" OnClick="RemoveRole_Click" />
                </td>
                <td style="height: 180px; vertical-align: top;">
                    InActive Roles<br />
                    <asp:ListBox ID="lstRemove" runat="server" Width="300px" OnSelectedIndexChanged="lstRemove_SelectedIndexChanged" Rows="10"></asp:ListBox>
                </td>
            </tr>
        </table>
    </ContentTemplate>
</asp:UpdatePanel>

Here are the events that fire when you click the Add / Remove buttons to move items between the lists.

        protected void AddRole_Click1(object sender, EventArgs e)
        {
            if (lstAdd.SelectedIndex > -1)
            {
                lstRemove.Items.Add(lstAdd.SelectedItem);
                lstAdd.Items.RemoveAt(lstAdd.SelectedIndex);
            }
        }

        protected void RemoveRole_Click(object sender, EventArgs e)
        {
            if (lstRemove.SelectedIndex > -1)
            {
                lstAdd.Items.Add(lstRemove.SelectedItem);
                lstRemove.Items.RemoveAt(lstRemove.SelectedIndex);
            }
        }


** You could trigger these updates from any type of event; in the sample, I highlight an item in one of the two lists and click Add/Remove to move it between the lists.

3. Post back using Ajax

This is currently my preferred approach, since it rides on top of jQuery Ajax and follows my normal design approach to building widgets (ajax load / add / edit / delete). Using this approach you can do real-time updates as the user edits the list, or you can submit your entire form, including all your list changes, without a full postback. There are so many options once you start using ajax calls to pass data back and forth between the client and server.

My example below uses a WCF service; you can also use ASMX to do this in earlier versions of .NET. This solution has a lot more steps, but it handles a whole “form submit” rather than just propagating your LIST changes to the server.

To start, I build some simple objects to pass between the client and server. The objects go both ways and are defined as DataContracts in C#.

    [DataContract]
    public class Member
    {
        [DataMember]
        public int MemberId;

        [DataMember]
        public string CustomerCode;

        [DataMember]
        public string DelToCode;
    }

    [DataContract]
    public class Rule
    {
        [DataMember]
        public int RuleId;

        [DataMember]
        public string Name;

        [DataMember]
        public bool Enabled;

        [DataMember]
        public List<Member> Members;
    }

Once we have an object to pass, we define a method that will use it.

    [OperationContract]
    [WebInvoke(
        RequestFormat = WebMessageFormat.Json,
        ResponseFormat = WebMessageFormat.Json,
        BodyStyle = WebMessageBodyStyle.WrappedRequest)
    ]
    bool UpdateRule(Rule rule);

    public bool UpdateRule(Rule rule)
    {
        // Do Update Using Object, then report success back to the caller
        return true;
    }

Next, we build a form that will be used with our service; everything on the form can be built during page load, or it can be populated via Ajax.

<fieldset class="withborder" id="manageRule">
<legend>Rule Update</legend>
<ol>
	<li>
		<label class="left">
			Rule ID<em>*</em></label>
		<input type="text" class="medium disabled" id="ruleId">
	</li>
	<li>
		<label class="left">
			Name<em>*</em></label>
		<input type="text" class="medium" maxlength="5" id="ruleName">
	</li>
	<li>
		<label class="left">
			Enabled<em>*</em></label>
		<input type="checkbox" class="checkBox" id="ruleEnabled" checked="checked">
	</li>
	<li>
		<label class="left">
			Assigned<em>*</em></label>
		<div>
			<div class="dropZone available">
				<ul class="connectedSortable ui-sortable" id="available" style="">
					<li id="MemberId_1" class="ui-state-highlight">Red</li>
					<li id="MemberId_2" class="ui-state-highlight">Green</li>
				</ul>
			</div>
			<div class="dropZone included">
				<ul class="connectedSortable ui-sortable" id="included" style="">
					<li id="MemberId_3" class="ui-state-highlight">Blue</li>
					<li id="MemberId_4" class="ui-state-highlight">Yellow</li>
					<li id="MemberId_5" class="ui-state-highlight">Orange</li>
				</ul>
			</div>
		</div>
	</li>
	<li style="text-align: right;">
		<input type="button" value="Update Rule" text="Add" id="updateRule">
	</li>
</ol>
</fieldset>

Finally, we put it all together with a JavaScript function that uses jQuery and JSON.org’s library to build and transfer data between the client and server.

    function UpdateRule() {
        var rule = {};

        rule.RuleId = parseInt($('#ruleId').val());
        rule.Name = $('#ruleName').val();
        rule.Enabled = $('#ruleEnabled').is(':checked');

        var $includedMembers = $('#included > li');

        rule.Members = [];

        // This is where you parse / add the values of your list (listbox, li, etc...)
        if ($includedMembers.length > 0) {
            $.each($includedMembers, function (index, value) {
                var id = $includedMembers[index].id.replace('MemberId_', '');
                var name = $includedMembers[index].firstChild.data.split(' - ');
                var member = { 'CustomerCode': name[0], 'DelToCode': name[1], 'MemberId': id };
                rule.Members.push(member);
            });
        }

        $.ajax({
            url: '/Service/ExampleWCFService.svc/UpdateRule',
            contentType: 'application/json; charset=utf-8',
            dataType: 'json',
            type: 'POST',
            data: JSON.stringify({ 'rule': rule }),
            success: function (rule) {
                // Do Something On Success
            }
        });
    }

** Don’t forget, you could have your service store this data in the cache/session/etc… if you don’t want to persist it in real time. In my sample, I send the whole form to be processed and not just the list. This gets rid of the entire postback process.
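
As a rough illustration of the cache idea, here is a hypothetical alternative body for UpdateRule (the cache-key convention and expiration are my own assumptions, not part of the original sample):

    public bool UpdateRule(Rule rule)
    {
        // Stash the submitted rule in the ASP.NET cache instead of saving it immediately.
        System.Web.HttpRuntime.Cache.Insert(
            "PendingRule_" + rule.RuleId,                   // assumed cache-key convention
            rule,                                           // deserialized object from the Ajax call
            null,                                           // no cache dependency
            System.DateTime.UtcNow.AddMinutes(20),          // absolute expiration
            System.Web.Caching.Cache.NoSlidingExpiration);  // no sliding expiration

        return true;
    }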

Each approach has its own requirements; the only one I tend not to use anymore is #2, since it requires adding the ASP.NET Ajax Library. Having an additional library and extra content in a project for this one feature isn’t worth it to me. I know the library has become more compact, but in general it’s still quite heavy (size-wise), and since I already “always” include jQuery, there really isn’t a need for the extra baggage.

JSON to CSV

I created a helper a few months back that used DATA URIs to download JSON as CSV, but due to IE’s implementation of DATA URIs (or lack thereof), it does not work in IE (any version). Here is the same helper that will just convert the data, which you can use any way you want (for example: in a popup, to display in a modal window, etc…).

<html>
<head>
    <title>Demo - Convert JSON to CSV</title>
    <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
    <script type="text/javascript" src="https://github.com/douglascrockford/JSON-js/raw/master/json2.js"></script>
    
    <script type="text/javascript">
		// JSON to CSV Converter
        function ConvertToCSV(objArray) {
            var array = typeof objArray != 'object' ? JSON.parse(objArray) : objArray;
            var str = '';

            for (var i = 0; i < array.length; i++) {
                var line = '';
                for (var index in array[i]) {
                    if (line != '') line += ','

                    line += array[i][index];
                }

                str += line + '\r\n';
            }

            return str;
        }
		
		// Example
        $(document).ready(function () {
		
			// Create Object
            var items = [
				  { name: "Item 1", color: "Green", size: "X-Large" },
				  { name: "Item 2", color: "Green", size: "X-Large" },
				  { name: "Item 3", color: "Green", size: "X-Large" }];

			// Convert Object to JSON
			var jsonObject = JSON.stringify(items);
				
			// Display JSON
            $('#json').text(jsonObject);
			
			// Convert JSON to CSV & Display CSV
            $('#csv').text(ConvertToCSV(jsonObject));
        });	
    </script>
</head>
<body>
    <h1>
        JSON</h1>
    <pre id="json"></pre>
    <h1>
        CSV</h1>
    <pre id="csv"></pre>
</body>
</html>

Here is the output from the script above.

JSON

Created with the “JSON.stringify()” function from json.org.

[{"name":"Item 1","color":"Green","size":"X-Large"},{"name":"Item 2","color":"Green","size":"X-Large"},{"name":"Item 3","color":"Green","size":"X-Large"}]

CSV

Created with the “ConvertToCSV()” function defined above.

Item 1,Green,X-Large
Item 2,Green,X-Large
Item 3,Green,X-Large

Compare SubSonic Objects (Columns) for Differences

A while back I needed a way to quickly compare a set of SubSonic v2.2 objects (data properties) for differences. Since the two source rows of data would be created at different times and have different primary keys, I needed a way to selectively compare columns/properties. My solution was a function called “ColumnsAreEqual” that I added to “RecordBase.cs”; it compares each property value of two SubSonic objects, while allowing you to pass a list of columns/properties to exclude.

        /// <summary>
        /// Compare values of two SubSonic objects for differences.  You can also pass an exclusion list 
        /// of columns/properties to exclude,  like a primary key or CreatedOn date.
        /// </summary>
        /// <param name="comparableObject">Object to Compare</param>
        /// <param name="ignoreColumnList">Comma separated list of ColumnNames to ignore</param>
        /// <returns></returns>
        public bool ColumnsAreEqual(T comparableObject, string ignoreColumnList = null)
        {
            if (comparableObject == null)
                return false;

            // If the ignore list is set, build an array of column names to skip
            string[] ignoreColumnNames = new string[] {};
            if (!string.IsNullOrEmpty(ignoreColumnList))
            {
                ignoreColumnNames = ignoreColumnList.Split(',');
            }

            foreach (TableSchema.TableColumnSetting setting in columnSettings)
            {
                // If there are columns to ignore, check the current ColumnName against ignore list and skip.
                if (ignoreColumnNames.Length > 0)
                {
                    bool ignored = false;

                    foreach (string ignoreColumnName in ignoreColumnNames)
                    {
                        if (ignoreColumnName.Trim().ToLower() == setting.ColumnName.ToLower())
                        {
                            ignored = true;
                            break;
                        }
                    }

                    // If this is a ignore column, skip.
                    if (ignored)
                    {
                        continue;
                    }
                }

                var before = setting.CurrentValue;
                var after = comparableObject.GetColumnValue<object>(setting.ColumnName);

                // If both are null, then they are equal
                if (before == null && after == null)
                    continue;

                // If one is null then they are not equal
                if (before == null || after == null)
                    return false;

                // Compare two non-null objects
                if (!before.Equals(after))
                    return false;
            }

            return true;
        }

// Example Usage
bool areEqual = before.Packaging.ColumnsAreEqual(after.Packaging,"PackagingId,CreatedOn,ModifiedOn");

Optional Column Updates with your Stored Procedures

Recently, I needed a way to quickly update some bulk data based on a part number… I started by building 2 stored procedures that each updated different columns, but a few minutes later I realized I was going to need at least 2 more permutations of my update procedure, since there were more column combinations I needed to update. I didn’t want to maintain a bunch of SPs for different types of updates; I just wanted a handy update SP that would update a column/field when I passed in data, or leave it alone if I passed in blank/null.

I came up with a T-SQL optional update parameter solution, which is based on using the “COALESCE” function along with typed null values. Normally, you pass in a list of values and COALESCE returns the first non-null value, but there is a catch: at least one argument must have a type, so a bare COALESCE(null, null) fails, while COALESCE(null, @TypedNullParameter) is fine and simply returns NULL. Because stored procedure parameters are typed, COALESCE(@Parameter, [Column]) returns the column’s current value whenever @Parameter is passed as NULL, which gives you an optional column update. Knowing this, I created the code below.

-- CREATE DEMO TABLE
CREATE TABLE [dbo].[Product](
	[PartNumber] [nvarchar](20) NULL,
	[Description] [nchar](20) NULL,
	[Comments] [nchar](20) NULL
) ON [PRIMARY];

GO

-- DEMO TABLE DATA
PRINT '';
PRINT 'INSERT SAMPLE DATA';
PRINT '---------------------------------------------------------- '
INSERT INTO [PRODUCT] VALUES (N'PART1', N'PART 1 Description', N'PART 1 Comment');

GO

-- CREATE DEMO PROCEDURE
CREATE PROCEDURE [dbo].[spProduct_Update]
	@PartNumber AS NVARCHAR(20),
	@Description AS NCHAR(20),
	@Comments AS NCHAR(20)
AS
BEGIN
	DECLARE @BEFORE AS NVARCHAR(200);
	DECLARE @AFTER AS NVARCHAR(200);

	SELECT @BEFORE = 'BEFORE: ' + [PartNumber] + '  |  ' + [Description] + '  |  ' + [Comments] FROM [Product] WHERE [PartNumber] = @PartNumber;

	UPDATE [Product]
		SET [Description] = COALESCE(@Description,[Description]),
			[Comments] = COALESCE(@Comments,[Comments])
	WHERE [PartNumber] = @PartNumber;

	SELECT @AFTER = ' AFTER: ' + [PartNumber] + '  |  ' + [Description] + '  |  ' + [Comments] FROM [Product] WHERE [PartNumber] = @PartNumber;

	PRINT @BEFORE;
	PRINT @AFTER;

END

GO

-- Fails : Both values are non-typed null values
PRINT '';
PRINT '';
PRINT 'FAIL = COALESCE(null,null)';  --2 non-typed null value

GO

DECLARE @MyField AS NVARCHAR(50);
SET @MyField = COALESCE(null, null);
PRINT @MyField;

GO

-- Pass : The second value is a typed null value
PRINT '';
PRINT '';
PRINT 'PASS = COALESCE(null,<typed null parameter>)';  --1 non-typed null value, 1 typed null value

GO

DECLARE @MyField AS NVARCHAR(50);
DECLARE @MyTypedParameter AS NVARCHAR(50);
SET @MyField = COALESCE(null, @MyTypedParameter);
PRINT @MyField;

GO

-- Using the COALESCE with a typed parameter to create an optional "column" update.
PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'NO UPDATES';
EXEC spProduct_Update 'PART1', null, null;

PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'UPDATE DESCRIPTION ONLY';
EXEC spProduct_Update 'PART1', 'PART 1 *** UPDATE DESCRIPTION ***', null;

PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'UPDATE COMMENTS ONLY';
EXEC spProduct_Update 'PART1', null, 'PART 1 *** UPDATE COMMENT ***';

PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'UPDATE DESCRIPTION & COMMENTS';
EXEC spProduct_Update 'PART1', '*** UPDATE BOTH ***', '*** UPDATE BOTH ***';

-- DELETE DEMO PROCEDURE
DROP PROCEDURE [dbo].[spProduct_Update];

GO

-- DELETE DEMO TABLE
DROP TABLE [dbo].[Product];

GO

In a nutshell, COALESCE needs at least one typed argument: COALESCE(null, null, null) fails because a bare null has no defined type, but COALESCE(@nullvalue, @nullvalue, @nullvalue) works because @nullvalue was declared with a type [e.g. DECLARE @nullvalue AS nvarchar(20)]. Here is the output from running the demo script above.

INSERT SAMPLE DATA
---------------------------------------------------------- 
(1 row(s) affected)
 
FAIL = COALESCE(null,null)
Msg 4127, Level 16, State 1, Line 3
At least one of the arguments to COALESCE must be a typed NULL.
 
 
PASS = COALESCE(null,<typed null parameter>)
 
---------------------------------------------------------- 
NO UPDATES

(1 row(s) affected)
BEFORE: PART1  |  PART 1 Description    |  PART 1 Comment      
 AFTER: PART1  |  PART 1 Description    |  PART 1 Comment      
 
---------------------------------------------------------- 
UPDATE DESCRIPTION ONLY

(1 row(s) affected)
BEFORE: PART1  |  PART 1 Description    |  PART 1 Comment      
 AFTER: PART1  |  PART 1 *** UPDATE DE  |  PART 1 Comment      
 
---------------------------------------------------------- 
UPDATE COMMENTS ONLY

(1 row(s) affected)
BEFORE: PART1  |  PART 1 *** UPDATE DE  |  PART 1 Comment      
 AFTER: PART1  |  PART 1 *** UPDATE DE  |  PART 1 *** UPDATE CO
 
---------------------------------------------------------- 
UPDATE DESCRIPTION & COMMENTS

(1 row(s) affected)
BEFORE: PART1  |  PART 1 *** UPDATE DE  |  PART 1 *** UPDATE CO
 AFTER: PART1  |  *** UPDATE BOTH ***   |  *** UPDATE BOTH *** 
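
If you call the procedure from C#/ADO.NET, passing DBNull.Value for a parameter you want to skip gives you the same optional-update behavior. A minimal sketch (the connection string is a placeholder):

using System;
using System.Data;
using System.Data.SqlClient;

class OptionalUpdateDemo
{
    static void Main()
    {
        // Placeholder connection string pointing at the demo database.
        using (SqlConnection connection = new SqlConnection(@"Data Source=.;Initial Catalog=Demo;Integrated Security=True"))
        using (SqlCommand command = new SqlCommand("dbo.spProduct_Update", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            // Update the description only; DBNull for @Comments leaves that column untouched.
            command.Parameters.Add("@PartNumber", SqlDbType.NVarChar, 20).Value = "PART1";
            command.Parameters.Add("@Description", SqlDbType.NChar, 20).Value = "New description";
            command.Parameters.Add("@Comments", SqlDbType.NChar, 20).Value = DBNull.Value;

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}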

Fishbowl Inventory C# .NET Integration

We’re using a program at work called Fishbowl Inventory for managing our sample inventory. The program is a full-blown ISRP (Inventory, Shipping, Receiving, and Packing) solution that does everything from sales order entry to warehouse logistics. It’s been pretty rock-solid (though I do regular full backups/restores and database re-indexes). As of today, we are still using version 4.7 (very old) because the product road map changed after version 5.0 (I’m not sure who drove their road map, but I feel they went the wrong way on a bunch of changes, and every change was followed by a huge bug list). In a nutshell, Fishbowl is a Java application that uses a Firebird database to store its data. With the release of version 4.x, Fishbowl added a new “integration” feature that allows developers to write applications that communicate with Fishbowl via XML. This was a big improvement, since previously I had written a few applications directly against the DB to generate labels (another missing feature) with a Dymo thermal printer, and I ran into lots of problems because Fishbowl does not like DB SCHEMA changes (even very simple, benign changes gave me and support many issues).

Fast forward 4 years and here I am: I need to get some data out of Fishbowl, but I find the ODBC drivers very unstable and I get errors more than 50% of the time when I try to pull data out. I also tried their C# SDK and received a bunch of random errors (probably because it’s for version 2011.7 vs. 4.7). Since I needed to get some data out, I did a quick Google search and found somebody had made a PHP wrapper called fishconnect, based on the SDK for version 2010.4. It says it requires Fishbowl Inventory 2010.4+ to work, but after reviewing the code (I don’t have a PHP server, so I didn’t test anything), I’m convinced it will probably work for all previous versions back to v4.7. After a few minutes reading the code, I figured I could make myself a nice C# wrapper to use on our ASP.NET intranet site.

Here is a quick summary of what I used to put this together….


1. Have an XSD file? Now generate me some objects!

I grabbed the “fbimsg.xsd” from fishbowlconnect PHP project, which is also provided with the official Fishbowl Java SDK.

Microsoft has had the “XSD.EXE” tool for a long time; it will generate your objects based on an XSD file. The command was moved into the Visual Studio Command Prompt with VS2010. To use the XSD tool, go to your Microsoft Visual Studio 2010 folder and launch “Visual Studio Command Prompt (2010)”. Once you’re at a command prompt, type the following.

xsd <location of XSD>.xsd /s

Presto, you got a nice DAL that is strongly typed!!!
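
The generated classes are plain, XmlSerializer-friendly types, so turning a request object into the XML that gets sent to Fishbowl is straightforward. A small helper sketch (my own naming, not part of the SDK):

    using System.IO;
    using System.Xml.Serialization;

    public static class XmlHelper
    {
        // Serialize any of the xsd.exe-generated request objects into an XML string.
        public static string ToXml(object requestObject)
        {
            XmlSerializer serializer = new XmlSerializer(requestObject.GetType());

            using (StringWriter writer = new StringWriter())
            {
                serializer.Serialize(writer, requestObject);
                return writer.ToString();
            }
        }
    }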


2. Communications with Fishbowl Requires a Big Endian Stream

Why MS only supports Little Endian streams out of the box is a mystery to me, maybe a push to use the MS stack?  I don’t know, but regardless you have to solve this problem to talk with Fishbowl.  I did a quick and dirty sample to get it working, but it was a terrible “permanent” solution and required “Allow unsafe code” to be enabled.  I ran to StackOverflow to find somebody smarter with better code and ended up grabbing what I needed from Jon Skeet’s MiscUtil Library.  With this, I now had full support for BigEndian data streams.
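
For reference, the framing itself is simple once you have big-endian support: as far as I can tell, each message is a 4-byte big-endian length followed by the UTF-8 XML payload (treat the exact framing as an assumption and verify against your Fishbowl version). A BCL-only sketch, if you don’t want to pull in MiscUtil:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    public static class BigEndianXmlStream
    {
        public static void WriteMessage(Stream stream, string xml)
        {
            byte[] payload = Encoding.UTF8.GetBytes(xml);

            // Convert the length to network (big-endian) byte order before writing.
            byte[] lengthPrefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));

            stream.Write(lengthPrefix, 0, lengthPrefix.Length);
            stream.Write(payload, 0, payload.Length);
            stream.Flush();
        }

        public static string ReadMessage(Stream stream)
        {
            // Read the 4-byte big-endian length, then the XML payload.
            int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(ReadExactly(stream, 4), 0));
            return Encoding.UTF8.GetString(ReadExactly(stream, length));
        }

        private static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            for (int offset = 0; offset < count; )
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException("Fishbowl closed the connection.");
                offset += read;
            }
            return buffer;
        }
    }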


3. Approving Fishbowl Integrated Applications

Fishbowl Inventory requires you to approve all 3rd-party applications in the Fishbowl Server GUI before they become active. If you are running Fishbowl as a service, you’ll have to stop it and then run it interactively. Once it’s started, go to “Integrated Applications” and you should see Fishbowl Connect C# listed; select it and click the green check mark to approve.

Fishbowl Inventory Integration - Approve

Fishbowl Server Integrated Applications - Approval Screen


4. Putting it all together and a working example.

The final product consists of a single class called “FishbowlSession” that implements IDisposable. This class handles all the communications with Fishbowl. In addition to this class, we have a DAL that was created from our XSD file and a Utilities library that provides a few helpers. I wrote the main class by writing NUnit tests; if you want to test against your Fishbowl server, change the values in the [Setup] test to reference valid data for your server (e.g. Server IP, Login, Password, Part Numbers).

If you find bugs please let me know, I only tested sending/receiving 4 message types so I’m not sure if all message types will work.


Example showing how to get the inventory for a part in Fishbowl.

        private static string GetFishbowlPartInventory(string part)
        {
            string inventoryResults = "Fishbowl Inventory Offline!";

            using (FishbowlSession fishbowlInventory = new FishbowlSession("192.168.168.168"))
            {
                // Connect to fishbowl using a valid login/password
                fishbowlInventory.Connect("app","app");

                // Make sure we are authenticated before we try to send/receive messages
                if (fishbowlInventory.IsAuthenticated)
                {
                    // Build PartTagQueryRqType Request
                    PartTagQueryRqType partTagQueryRqType = new PartTagQueryRqType { LocationGroup = "Product", PartNum = part };

                    // Submit Request and get Response
                    PartTagQueryRsType partTagQueryRsType = fishbowlInventory.IssueRequest<PartTagQueryRsType>(partTagQueryRqType);

                    // Unregister Part, show custom message
                    if (partTagQueryRsType.statusCode != "1000")
                    {
                        inventoryResults = Utilities.StatusCodeMessage(partTagQueryRsType.statusCode);
                    }

                    // Tag object is optional, if there is no tag element you can not get the quantity
                    if (partTagQueryRsType.Tag != null && partTagQueryRsType.Tag.Length > 0)
                    {
                        Tag tag = partTagQueryRsType.Tag[0];

                        inventoryResults = int.Parse(tag.Quantity) > 0
                                               ? tag.Quantity + " Available in Fishbowl Inventory."
                                               : "No Inventory Available in Fishbowl Inventory.";
                    }
                }
            }

            return inventoryResults;
        }

IMPORTANT NOTE: There is an important caveat to this solution… Communication with the server will consume one of your seats that you have available based on your license key. This means, if your key only allows for 5 concurrent users and your app tries to connect when all the seats are full… it will fail! I have put a lot of other “things” in place to prevent this in my setup; short session timeouts, terminal server user caps, etc… but it’s still possible…

Download Fishbowl Inventory C# Wrapper

jQuery Validation – Show & Focus on first error only!

Have you ever wanted to show one error message at a time, or create a different type of visual cue for a specific invalid element on a form? Using the validate options, you can easily create any effect you want.

The example below is designed to show the first error message in a form and to set focus on that first element. The message is displayed in a basic JavaScript alert box; not very elegant, but it makes for a simple demo.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Show JS Alert - First Validation Error</title>
    <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="Scripts/jquery.validate.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function() {
            $("#elForm").validate({
                onfocusout: false,
                invalidHandler: function(form, validator) {
                    var errors = validator.numberOfInvalids();
                    if (errors) {
                        alert(validator.errorList[0].message);
                        validator.errorList[0].element.focus();
                    }
                },
                rules: {
                    TextBox1: { required: true, minlength: 2 },
                    TextBox2: { required: true, minlength: 2 },
                    TextBox3: { required: true, minlength: 2 }
                },
                messages: {
                    TextBox1: { required: "TextBox1 Required", minlength: "TextBox1 MinLength" },
                    TextBox2: { required: "TextBox2 Required", minlength: "TextBox2 MinLength" },
                    TextBox3: { required: "TextBox3 Required", minlength: "TextBox3 MinLength" }
                },
                errorPlacement: function(error, element) {
                    // Override error placement to not show error messages beside elements //
                }
            });
        });
    </script>

    <style type="text/css">
        label { width: 90px; display: block; float: left; }
        ul { list-style: none; }
        ul li { line-height: 1.8; }
    </style>
</head>
<body>
    <form id="elForm" action="#">
    <div>
        <h1>jQuery Validation - Show First Error Only</h1>
        <ul>
            <li><label>Text Box 1</label><input type="text" name="TextBox1" id="TextBox1" value="" /></li>
            <li><label>Text Box 2</label><input type="text" name="TextBox2" id="TextBox2" value="" /></li>
            <li><label>Text Box 3</label><input type="text" name="TextBox3" id="TextBox3" value="" /></li>
        </ul>
        <input type="submit" id="submit" value="Validate" />
    </div>
    </form>
</body>
</html>

Here are the important parts to focus on.

1. invalidHandler, this option gives you access to the current validator and all the error messages/items via “validator.errorList”.

                invalidHandler: function(form, validator) {
                    var errors = validator.numberOfInvalids();
                    if (errors) {
                        alert(validator.errorList[0].message);  //Only show first invalid rule message!!!
                        validator.errorList[0].element.focus(); //Set Focus
                    }
                }

2. errorPlacement, this function controls how error messages are displayed. Since we don’t want any error messages displayed by default, we override the default errorPlacement with an empty method.

                errorPlacement: function(error, element) {
                    // Override error placement to not show error messages beside elements //
                }

The end result is an alert box showing the first broken rule for the first invalid element on the page. In the example, each input has 2 rules (required and minlength); if you enter a single character into the first text box, the message will change from “TextBox1 Required” to “TextBox1 MinLength”.

Download JSON to CSV using JavaScript

Here is a simple way to convert your JSON to CSV. If you’re using FF or Chrome, which support data URIs, this function converts your data and tells your browser to download the results as a text file. If you are using Internet Explorer (“IE”), a new window will pop up so you can copy & paste or use File -> Save As to download the results as text. The JSON parsing is done with json.org’s open source JSON parser, which you can download here.

Why can’t IE download the file as CSV like Firefox? The long answer is IE’s data URI support: it was missing until IE8, and when implemented in IE8+, the security restrictions prevent solutions like this from working! To force a download of your JSON data as CSV in IE, you’ll need to use an alternative technique like server-side parsing.

		function DownloadJSON2CSV(objArray)
		{
			var array = typeof objArray != 'object' ? JSON.parse(objArray) : objArray;
			var str = '';
			
			for (var i = 0; i < array.length; i++) {
				var line = '';
				for (var index in array[i]) {
					if(line != '') line += ','
				
					line += array[i][index];
				}
 
				str += line + '\r\n';
			}

			if (navigator.appName != 'Microsoft Internet Explorer')
			{
				window.open('data:text/csv;charset=utf-8,' + escape(str));
			}
			else
			{
				var popup = window.open('','csv','');
				popup.document.body.innerHTML = '<pre>' + str + '</pre>';
			}			
		}
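
For the IE case above, a server-side fallback can stream the CSV back with a Content-Disposition header. A rough sketch as a generic ASHX handler (the handler and field names are my own, not from the original post):

    using System.Web;

    // Receives the already-converted CSV (e.g. posted from JavaScript) and streams
    // it back as a file download, which works in IE as well.
    public class DownloadCsvHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string csv = context.Request.Form["csv"] ?? string.Empty;

            context.Response.ClearContent();
            context.Response.ContentType = "text/csv";
            context.Response.AddHeader("Content-Disposition", "attachment;filename=export.csv");
            context.Response.Write(csv);
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }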

WordPress 3.0 – A CMS Solution?

I started working on a project site that is going to use WordPress as the CMS engine. I wasn’t sure if this was going to work, but after a little time I found it was pretty easy to do. You start by deleting all the default content and disabling the default plug-ins and management screen options. Once everything is removed/disabled, you create the pages (e.g. Home, About, News, etc…) that will display all the content you need on the website. Once you have your main page set up (e.g. Home), you need to change the default home page from a dynamic page of blog posts to a static page. You change the default home page under Settings –> Reading –> Front Page Display –> A Static Page –> choose an option from the Front Page drop-down list. That’s it, you now have WordPress running as a basic CMS solution.

I wish I could say that’s it, but most likely you’ll need to download and install some additional plug-ins. I found many plug-ins still do not support WordPress 3.0, so you’ll need to be careful! If you happen to install a bad plug-in, there is an easy fix to get your site back online… delete the plug-in folder (via FTP or your hosting control panel) and WordPress will automatically disable the plug-in.

Here is the list of plug-ins I added for my project:

  1. Contact Form 7
    This is a great plug-in; it provides an easy way to create forms that get emailed to you. You create the forms online using HTML and a shortcode-style syntax for the input fields. The shortcode inputs are then accessible in your email, which you also create using HTML.
  2. WP Google Maps
    This is a very basic Google Maps plug-in; there are about 10 options, and the only extra thing you need is a Google Maps API key.
  3. WordPress.com Stats
    I’ve been using this on my blog and love the UI. I wish this was all being done client-side, but for some reason it was created as a service. By default it injects a smiley face into your page, so make sure you disable this in your theme’s style sheet.

The only other thing I added was an SSL plug-in that I created based on 3 other plug-ins. I created my own so I could force WP to load specific pages as HTTPS and then default back to HTTP if a page was not explicitly set to support SSL. I found most of the existing plug-ins will leave the user browsing over SSL after they visit an SSL page; I didn’t want this, and my plug-in solves that problem.

<?php
/*
Plugin Name: WPSSL Force SSL
Plugin URI: 
Description: This plug-in is based on WPSSL, forcessl and various other posts/comments I found while searching for a solution.  The plug-in is used by adding the meta tag "force_ssl" with any value to any pages where you want SSL ("HTTPS") enabled.  If the page does NOT have this set and you're not looking at an admin page (you can enable SSL for admin/login pages in wp-config.php), then the page is displayed as HTTP.  This prevents links on the SSL page from displaying as SSL for non-secure pages.  This plug-in was tested on WordPress 3.0.
Author: Zachary Hunter (based on Austin Web Development)
Version: 1.0
Author URI: 


*/

function wpssl_forcessl() 
{
	global $post;
	
    $post_id = $post;
   
    if (is_object($post_id)) 
    {
		$post_id = $post_id->ID;
	}
	
    $force_ssl  = get_post_meta($post_id, 'force_ssl');

    if(!empty($force_ssl))
    {
    	if(!stristr($_SERVER['REQUEST_URI'], 'wp-admin')) {
			if($_SERVER["HTTPS"] != "on") {
				$newurl = "https://" . $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"];
            	header("Location: $newurl");
            	exit();
        	}
     	}
    } else {
        if (!stristr($_SERVER['REQUEST_URI'], 'wp-admin')) {
            if ($_SERVER["HTTPS"] == "on") {
                $newurl = "http://" . $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"];
                header("Location: $newurl");
                exit();
            }
        }
    }
}

add_action('wp', 'wpssl_forcessl');
?>

To use the plug-in, follow these steps.

1. Copy the code above into a file in your plugin folder (e.g. /wp-content/plugins/wpssl/wpssl.php).
2. Go to Plugins -> Plugins and enable “WPSSL Force SSL”.
3. Go to the page where you want to force SSL and open the “Custom Fields” section.
4. Click “Add Custom Field”, and use the settings below (name = force_ssl, value = true).
** DONE **

Improved NPOI ExportToExcel Function

I was looking over my generic export-DataTable-to-Excel function the other day and noticed an issue. My default method was writing everything out as strings, which prevented users from highlighting number columns for subtotals. To fix the problem they could use “Text to Columns”, but this was becoming a common complaint I wanted to resolve. I came up with an improved “ExportToExcel” function that inserts the correct data type.

        /// <summary>
        ///   Render DataTable to Excel File
        /// </summary>
        /// <param name = "sourceTable">Source DataTable</param>
        /// <param name = "fileName">Destination File Name</param>
        public static void ExportToExcel(DataTable sourceTable, string fileName)
        {
            HSSFWorkbook workbook = new HSSFWorkbook();
            Sheet sheet = workbook.CreateSheet("Sheet1");
            Row headerRow = sheet.CreateRow(0);

            // Create Header Style
            CellStyle headerCellStyle = workbook.CreateCellStyle();
            headerCellStyle.FillForegroundColor = HSSFColor.GREY_25_PERCENT.index;
            headerCellStyle.FillPattern = FillPatternType.SOLID_FOREGROUND;

            // Create Date Style
            CellStyle dateCellStyle = workbook.CreateCellStyle();
            dateCellStyle.DataFormat = HSSFDataFormat.GetBuiltinFormat("m/d/yy");

            // Build Header
            foreach (DataColumn column in sourceTable.Columns)
            {
                Cell headerCell = headerRow.CreateCell(column.Ordinal);
                headerCell.SetCellValue(column.ColumnName);
                headerCell.CellStyle = headerCellStyle;
            }

            // Build Details (rows)
            int rowIndex = 1;
            int sheetIndex = 1;
            const int maxRows = 65536;

            foreach (DataRow row in sourceTable.Rows)
            {
                // Start new sheet max rows reached
                if (rowIndex % maxRows == 0)
                {
                    // Auto size columns on current sheet
                    for (int h = 0; h < headerRow.LastCellNum; h++)
                    {
                        sheet.AutoSizeColumn(h);
                    }

                    sheetIndex++;
                    sheet = workbook.CreateSheet("Sheet" + sheetIndex);
                    Row additionalHeaderRow = sheet.CreateRow(0);

                    for (int h = 0; h < headerRow.LastCellNum; h++)
                    {
                        Cell additionalHeaderColumn = additionalHeaderRow.CreateCell(h);
                        additionalHeaderColumn.CellStyle = headerRow.GetCell(h).CellStyle;
                        additionalHeaderColumn.SetCellValue(headerRow.GetCell(h).RichStringCellValue);
                    }

                    rowIndex = 1;
                }

                // Create new row in sheet
                Row dataRow = sheet.CreateRow(rowIndex);

                foreach (DataColumn column in sourceTable.Columns)
                {
                    Cell dataCell = dataRow.CreateCell(column.Ordinal);

                    switch (column.DataType.FullName)
                    {
                        case "System.String":
                            dataCell.SetCellValue(row[column].ToString());
                            break;
                        case "System.Int":
                        case "System.Int32":
                        case "System.Int64":
                        case "System.Double":
                        case "System.Decimal":
                            double val;
                            dataCell.SetCellValue(Double.TryParse(row[column].ToString(), out val) ? val : 0);
                            break;
                        case "System.DateTime":
                            DateTime dt = new DateTime(1900, 01, 01);
                            DateTime.TryParse(row[column].ToString(), out dt);

                            dataCell.SetCellValue(dt);
                            dataCell.CellStyle = dateCellStyle;
                            break;
                        default:
                            dataCell.SetCellValue(row[column].ToString());
                            break;
                    }
                }

                rowIndex++;
            }

            for (int h = 0; h < headerRow.LastCellNum; h++)
            {
                sheet.AutoSizeColumn(h);
            }

            ExportToExcel(workbook, fileName);
        }

The key part of the function above to review is the “switch (column.DataType.FullName)” code block. This grabs the DataTable’s column data type to use in the SetCellValue() call.

Once the workbook is built, we call an overload of the same function that expects an NPOI workbook. This overload sends an XLS file back to the user via their web browser using the HttpResponse stream.

        /// <summary>
        /// Render Excel File to HttpResponse (Browser)
        /// </summary>
        /// <param name="workbook">NPOI Workbook</param>
        /// <param name="fileName">Destination File Name</param>
        public static void ExportToExcel(HSSFWorkbook workbook, string fileName)
        {
            using (MemoryStream memoryStream = new MemoryStream())
            {
                workbook.Write(memoryStream);
                memoryStream.Flush();

                try
                {
                    HttpResponse response = HttpContext.Current.Response;
                    response.ClearContent();
                    response.ClearHeaders();
                    response.Buffer = true;
                    response.ContentType = "application/vnd.ms-excel";
                    response.AddHeader("Content-Length", memoryStream.Length.ToString());
                    response.AddHeader("Content-Disposition", string.Format("attachment;filename={0}", fileName));
                    response.BinaryWrite(memoryStream.GetBuffer());
                    response.Flush();
                    response.End();
                }
                catch
                {
                    // Do nothing, error expected due to Flush();
                }
            }
        }

I’ve used the above function to send files with 3-4 sheets full of data back to the browser… but the file size is usually “VERY LARGE”. To work around this, I have a helper function that adds the Excel file to a zip (in memory) before sending it back to the client; a rough sketch of that idea follows.
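
The zip helper itself isn’t included in this post, so treat the following as a sketch of the idea using .NET 4.5’s ZipArchive (the original helper may well have used a library such as DotNetZip or SharpZipLib; the method and parameter names are mine). You would pass it the bytes you already have from workbook.Write (e.g. memoryStream.ToArray()) along with the file names.

        using System.IO;
        using System.IO.Compression;
        using System.Web;

        public static class ZipDownloadHelper
        {
            /// <summary>
            /// Compress a generated Excel file (as bytes) into an in-memory zip and stream it to the browser.
            /// </summary>
            public static void ExportToZip(byte[] excelBytes, string excelFileName, string zipFileName)
            {
                using (MemoryStream zipStream = new MemoryStream())
                {
                    // Build the zip archive in memory; leaveOpen keeps the MemoryStream usable afterwards.
                    using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Create, true))
                    using (Stream entryStream = archive.CreateEntry(excelFileName, CompressionLevel.Optimal).Open())
                    {
                        entryStream.Write(excelBytes, 0, excelBytes.Length);
                    }

                    HttpResponse response = HttpContext.Current.Response;
                    response.ClearContent();
                    response.ClearHeaders();
                    response.ContentType = "application/zip";
                    response.AddHeader("Content-Length", zipStream.Length.ToString());
                    response.AddHeader("Content-Disposition", string.Format("attachment;filename={0}", zipFileName));
                    response.BinaryWrite(zipStream.ToArray());
                    response.Flush();
                    response.End();
                }
            }
        }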

In my last test, a 17.5MB document with 2 sheets of data was compressed to 3.5MB with my zip utility. There was still a ~13 second delay to generate the XLS file, but once the browser prompted for the download, the file downloaded in a few seconds.