eWorld.UI - Matt Hawley

Ramblings of Matt

Using Razor Pages with WebForms Master Pages

January 7, 2011 14:56 by matthaw

There's been some recent chatter about mixing ASP.NET MVC with WebForms, which is something I'm all too familiar with. That setup works just fine, but what if you want to use the new Razor view engine in ASP.NET MVC while still sharing a WebForms master page? A bit of backstory first.

 

On a secondary project that I work on within Microsoft, we were given the ability to rewrite the entire UI of the application in ASP.NET MVC. At the time, the first beta of MVC3 was rolling out, and we decided to take the plunge and use the Razor view engine. Unfortunately, the common UI layout we were required to use - one shared across multiple properties - was WebForms based. Our attempts to convince the developers of the common UI components to support a more modularized infrastructure failed for the foreseeable future, so we were left with the conundrum of finding an intermediary solution.

 

We talked with the ASP.NET team to see whether a Razor view whose layout is a WebForms master page was, or could become, a supported scenario - the general consensus was "No". The primary reason is that WebForms and Razor pages have completely different architectures and runtime execution models (rendering from a control tree versus a mostly single-pass template, respectively). However, ASP.NET MVC does allow context switching between view engines for partials that are rendered. With that in mind, I decided to see if a partial-based implementation could be achieved.

 

The solution is fairly simple, and it provides an easy upgrade path for if and when you can ditch the WebForms master page. We'll start by creating a few extension methods on the controller for rendering Razor-based views. We do this so that a WebForms view can be rendered while, from the action's point of view, it looks like you're rendering a Razor view.

 

using System.Web.Mvc;

// Extension methods must live in a static class.
public static class ControllerExtensions
{
    public static ViewResult RazorView(this Controller controller)
    {
        return RazorView(controller, null, null);
    }

    public static ViewResult RazorView(this Controller controller, object model)
    {
        return RazorView(controller, null, model);
    }

    public static ViewResult RazorView(this Controller controller, string viewName)
    {
        return RazorView(controller, viewName, null);
    }

    public static ViewResult RazorView(this Controller controller, string viewName, object model)
    {
        if (model != null)
            controller.ViewData.Model = model;

        // Stash the real view name; RazorView.aspx reads it back out.
        controller.ViewBag._ViewName = GetViewName(controller, viewName);

        return new ViewResult
        {
            ViewName = "RazorView",
            ViewData = controller.ViewData,
            TempData = controller.TempData
        };
    }

    static string GetViewName(Controller controller, string viewName)
    {
        return !string.IsNullOrEmpty(viewName)
            ? viewName
            : controller.RouteData.GetRequiredString("action");
    }
}

 

The thing to note from above is that we're actually rendering a view called "RazorView" and specifying the actual view to render in ViewBag._ViewName. This value is used later to render the appropriate Razor view.

Side note: this is a naive implementation of GetViewName that assumes, when you do not specify a view name, you're rendering a view named after the action in the route data. This may not be correct everywhere, but it seems to work reasonably well.

You'll also need to create a "RazorView" WebForms view in the Shared directory that uses your master page. This view ultimately renders the Razor view as a partial, as specified by your action result. I've also created a Razor layout page for all Razor views to use, which alleviates having to touch each view if and when a conversion to a Razor-only view engine takes place. You'll find each of the view files below.

 

Shared/Site.Master:

<%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
</head>
<body>
    <div>
        <asp:ContentPlaceHolder ID="MainContent" runat="server" />
    </div>
</body>
</html>

Shared/_SiteLayout.cshtml:

@RenderBody()

Shared/RazorView.aspx:

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage<dynamic>" %>

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <% Html.RenderPartial((string) ViewBag._ViewName); %>
</asp:Content>

Home/Index.cshtml:

@{
    Layout = "~/Views/Shared/_SiteLayout.cshtml";
}

<h2>This is the index page</h2>

 

What you'll notice is that it's not possible to use Razor content sections to fill different parts of the WebForms master page. You can, however, use Razor content sections from the Razor layout file, since at that point you're inside the Razor rendering model. If you do have to render multiple WebForms content sections, you may need to make things a bit more elaborate by specifying specific content areas manually, as sketched below. Ultimately, I wouldn't recommend that - try to limit your output to a single main content placeholder.
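For illustration only, a hedged sketch of that manual wiring - here SidebarContent is a hypothetical second ContentPlaceHolder on the master page, and _SidebarViewName a hypothetical ViewBag entry that a second controller extension would set:

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <% Html.RenderPartial((string) ViewBag._ViewName); %>
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="SidebarContent" runat="server">
    <% if (ViewBag._SidebarViewName != null) {
           Html.RenderPartial((string) ViewBag._SidebarViewName);
       } %>
</asp:Content>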

 

Within your controller, your action result is simply:

public ActionResult Index()
{
    return this.RazorView();
}

 

While this solution isn't optimal, it does allow you to use Razor views while still adhering to a common WebForms master page. If possible, I recommend not doing this at all, but sometimes (as in our situation) you have to do what is necessary. If your master pages are simple enough (and primarily just layout), I fully recommend having a WebForms master page and a Razor layout page. Regardless, with this setup, migration to a Razor-only architecture is fairly trivial: change all of the this.RazorView() calls to View() calls and copy your layout over to _SiteLayout.cshtml.

 

Click here to download the source for this solution.



Mercurial Conversion from Team Foundation Server

March 16, 2010 09:12 by matthaw
In addition to blogging, I'm also using Twitter. Follow me @matthawley

One of my many (almost) daily tasks on the CodePlex platform, since we released Mercurial as a supported version control system, is converting projects from Team Foundation Server (TFS) to Mercurial. I'm happy to say that across all the conversions I have done since mid-January, the success rate for migrating full source history is about 95%. To get to this point, I have had to learn and refine several techniques using a few different tools. Before I jump into the meat of the post, there are several setup tasks that need to be done first.

 

Configuring Mercurial

Mercurial comes with a pre-packaged extension, convert, which supports nearly all major version control systems (Subversion, CVS, git, Darcs, Bazaar, Monotone, GNU Arch, and yes - even Mercurial itself). Because this is an extension, you need to enable it in the Mercurial configuration. Open the Mercurial.ini file located in C:\Users\<UserName> and add the following lines:

[extensions]
convert =

Once this is saved, you can test whether the extension is working by typing hg help convert. If things are configured correctly, it will display the help information for the convert extension.

 

Conversion Setup

Mercurial's convert extension gives you fairly full control over the conversion. The options include mapping usernames, including/excluding certain paths, and renaming branches. These options work to our advantage, since TFS requires Active Directory and stores all commits in the format DOMAIN\User (a perfectly acceptable format, but not an ideal one). Using the username mapping ability, create a new text file named auths.txt and start adding your mappings:

CONTOSO\Joe = Joe
CONTOSO\Mark = Mark

It is only necessary to add username mappings that appear in the TFS source history. Later, you will see how this username mapping file is used.

 

Utilizing SvnBridge

The first approach I take when converting a project for CodePlex - successful about 85% of the time - is to use the hosted SvnBridge. From the command line, type the following:

hg convert --authors auths.txt https://xunitcontrib.svn.codeplex.com/svn xunitcontrib

If the conversion works, your project will have been successfully converted to Mercurial. It is recommended that you review the log history to ensure everything is in order. Should anyone continue to check in sources, just re-run the conversion against the already-converted Mercurial repository, and it will convert anything new.

 

Utilizing A Suite of Tools

Should the hosted SvnBridge not work, or should your TFS server be hosted elsewhere, the process is not nearly as straightforward and requires several tools. Download and install the following:

  • tfs2svn - This converts TFS history into Subversion history
  • VisualSVN Server - Used as the intermediary Subversion repository store
  • TortoiseSVN - Used for storing the Subversion credentials as well as admin tasks

After you have installed all of these tools (and probably rebooted your machine), follow these steps.

 

Step 1: Create an empty repository in VisualSVN Server. This is where the history from TFS will be migrated as an intermediary step. Make sure that you do not select Create default structure (trunk, branches, tags). If you accidentally check this option, your tfs2svn import will fail because it expects an empty repository.

 

Step 2: On the newly created repository, open the Manage Hooks dialog (right-click the repository -> All Tasks -> Manage Hooks...). Edit the Pre-revision property change hook by entering a single carriage return; an empty hook script exits successfully, which permits revision property changes. This enables the hook that allows tfs2svn to rewrite history in Subversion. If the hook is enabled correctly, it will appear bold in the list of available hooks. Click OK to dismiss the dialog, and restart VisualSVN Server (right-click VisualSVN Server -> Restart).

 

Step 3: You now need to add a user account to VisualSVN Server so that tfs2svn can authenticate and import the history. In VisualSVN, right-click Users and select Create User. Type in an easy-to-remember username and password and click OK. tfs2svn uses cached Subversion credentials for the import, and the easiest way to cache your credentials is to check out your new Subversion repository. When prompted for credentials, make sure the Save Authentication checkbox is checked.
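For example, if you also have a Subversion command-line client installed (it shares its credential cache with TortoiseSVN), a checkout along these lines - the repository name and local path are hypothetical - will prompt for and cache the credentials:

svn checkout https://localhost/svn/MyRepository C:\temp\MyRepository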

 

Step 4: Launch tfs2svn and fill in the connection information for your TFS and Subversion servers. Once the information is correctly filled out, click the Convert button and wait while it extracts the history from TFS and imports it into your Subversion repository.

 


Step 5: Once the tfs2svn process has completed, you can view the history of the Subversion repository. One thing you'll notice is that tfs2svn prefixes all commit messages with "[TFS Changeset #12345]". In some instances, tfs2svn will also add "tfs2svn: " to the commit message. If you don't mind your Mercurial repository having these messages, skip to step 6 - otherwise, continue on.

 


To remove the extraneous commit messages, you will need to rewrite the Subversion log using administrative tools. The process is as follows:

  1. Get an XML log of the repository history
  2. For each logentry in the XML log get the msg content and remove the messages
  3. Create a temporary file writing the updated log message to it
  4. Call svnadmin setlog, passing in the path to the temporary file from step 3.

As you can see, this is a tedious process - so I automated it with a PowerShell script (downloadable here). The script assumes that your VisualSVN Server can be accessed at https://localhost/svn/<RepositoryName> and that the physical path of your VisualSVN repositories is C:\Repositories. Once you have updated the script, place it in %My Documents%\WindowsPowerShell, open a PowerShell command prompt, and type rewrite-log RepositoryName.
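For reference, step 4 boils down to one svnadmin call per revision. A hypothetical example that rewrites the message of revision 42 (this call goes through the pre-revision property change hook enabled in step 2):

svnadmin setlog C:\Repositories\MyRepository -r 42 message.txt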

Note: If you view the Subversion log again, you may notice that the message for each revision still contains the content we were trying to strip off. If this is the case, don't worry - the history has been rewritten, and you may be viewing a cached version of the repository.

 

Step 6: Now that your repository has been migrated to Subversion and the log has been rewritten, you can convert it into a Mercurial repository. Using the same syntax as the SvnBridge conversion earlier, type the following at your command line:

hg convert --authors auths.txt https://localhost/svn/xunitcontrib xunitcontrib


 

Once completed, you will have a full history of your TFS repository converted to Mercurial! You can now start using your local repository immediately or push the history to a central repository for others to start using.

 


VisualHG: A Mercurial Plugin for Visual Studio

March 15, 2010 12:03 by matthaw
In addition to blogging, I'm also using Twitter. Follow me @matthawley

 

Mercurial is quickly gaining momentum in the open source world, and great tooling that makes developers' lives easier is always essential. Most developers using Mercurial know of the explorer shell plugin, TortoiseHg, but what many don't know about is VisualHG. In summary, VisualHG:

  • Is a source control plugin for Visual Studio (works with 2005, 2008, and 2010)
  • Sits on top of TortoiseHg, exposing common commands in Visual Studio
  • Tracks file status changes automatically, indicating each file's state in Visual Studio
  • Has absolutely no SCC bindings!
  • Is absolutely free!

An occasional gripe we hear from CodePlex users is that when they download and open a project from CodePlex that contains SCC bindings, they quickly get annoyed by the Visual Studio warnings about working temporarily uncontrolled. This is why the second-to-last bullet point is even listed - it's that important! The mere fact that it's free is just a bonus :)

 

Getting started with VisualHG is very simple.

  1. Download and install TortoiseHg and VisualHG.
  2. Open Visual Studio and go to Tools -> Options.
  3. In the options tree view, select Source Control. (You may need to check the Show all settings checkbox)
  4. Select VisualHG from the drop-down list, and click OK.
  5. Open your Mercurial based solution to see the plugin installed and determining your files' statuses.

So, why am I writing this post? Well, I wanted to highlight and recommend a great Mercurial-based project hosted on CodePlex that we, the CodePlex team, use every day. Don't get me wrong - a few of us still use Mercurial from the command line (myself included) - but I wouldn't go without having VisualHG installed, if only for tracking file status changes (essential when doing lots of refactoring).

 




WikiPlex v1.3 Released

March 11, 2010 17:53 by matthaw
In addition to blogging, I'm also using Twitter. Follow me @matthawley

 

It's been many months since the last release of WikiPlex, but only because there hasn't been a lot of churn recently. I'm very happy with where WikiPlex is, and it continues to be a very integral part of the CodePlex website!

 

Here's what you'll find in WikiPlex v1.3:

  1. Documentation - This new documentation includes
    1. Full Markup Guide with Examples
    2. Articles on Extending WikiPlex
    3. API Documentation
  2. Video Macro - This macro was updated to support Channel9 Videos.
  3. Syntax Highlight Support - One more language has been included:
    1. {code:powershell} ... Your PowerShell Code ... {code:powershell}

This time I did what I promised two releases ago and provided some good documentation. I even went so far as to create another open-source project, WikiMaml, which takes wiki syntax and converts it to Sandcastle MAML output. The project isn't foolproof, and it's not where I want it to end up, but it is working great within WikiPlex to generate all of the non-API documentation. As always, if you have any ideas for new macros or extensibility points, please post them in the issue tracker and I'll make sure to look at them!

Now, go and download WikiPlex v1.3!



Extending WikiPlex with Scope Augmenters

March 10, 2010 10:40 by matthaw
In addition to blogging, I'm also using Twitter. Follow me @matthawley

 

This post is long overdue, but as I'm preparing the v1.3 release of WikiPlex and working on documentation (yes, I did say documentation), I realized that one more extension point in WikiPlex had not yet been covered. This last, important (but rarely used) piece is extending WikiPlex with Scope Augmenters. Scope Augmenters allow you to post-process the collection of scopes - further augmenting it, or inserting/removing scopes - prior to rendering. WikiPlex comes with three out-of-the-box Scope Augmenters, used for indentation, tables, and lists. For reference, I'll explain how and why the IndentationScopeAugmenter was created.

 

Why It's Needed

The IndentationMacro allows for an arbitrary indentation level, indicated by the number of colons placed at the beginning of the line. Let's take a look at the primary macro rule:

new MacroRule(@"(^:+\s)[^\r\n]+((?:\r\n)?)$",
              new Dictionary<int, string>
              {
                 {1, ScopeName.IndentationBegin},
                 {2, ScopeName.IndentationEnd}
              });

As you can see, we capture any number of colons at the beginning of the line, but the ending scope knows nothing about how many levels were defined. Correctly rendering the ending scope while knowing nothing about its beginning scope is not a trivial task without context. This is exactly why a Scope Augmenter is used: it has that context.

 

Ultimately, we would like the input

:: I am content

to be rendered as

<blockquote><blockquote>I am content</blockquote></blockquote>

 

Create a Scope Augmenter

Scope Augmenters can be as simple as you need them to be, but they can also get fairly involved - case in point, the supplied ListScopeAugmenter requires a complex algorithm to correctly identify levels, nested levels, and when to start new lists. A Scope Augmenter takes in the associated macro, the current list of scopes, and the current content, and returns a new list of scopes to be rendered. In your solution, create a class called IndentationScopeAugmenter that implements WikiPlex.Parsing.IScopeAugmenter. You'll then implement the Augment method.

   1:  public IList<Scope> Augment(IMacro macro, IList<Scope> capturedScopes, string content)
   2:  {
   3:      var augmentedScopes = new List<Scope>(capturedScopes.Count);
   4:   
   5:      int insertAt = 0;
   6:      for (int i = 0; i < capturedScopes.Count; i = i + 2)
   7:      {
   8:          Scope begin = capturedScopes[i];
   9:          Scope end = capturedScopes[i + 1];
  10:   
  11:          string beginContent = content.Substring(begin.Index, begin.Length);
  12:          int depth = Utility.CountChars(':', beginContent);
  13:   
  14:          for (int j = 0; j < depth; j++)
  15:          {
  16:              augmentedScopes.Insert(insertAt, begin);
  17:              augmentedScopes.Add(end);
  18:          }
  19:   
  20:          insertAt = augmentedScopes.Count;
  21:      }
  22:   
  23:      return augmentedScopes;
  24:  }

The indentation begin / end scopes always come in a sequential pair, which is why I'm able to grab the begin and end scopes on lines 8 and 9. Next, we need to determine the indentation depth, so we grab the beginning content (which is ultimately the leading run of colons). On line 12, we count the number of colons, which gives us our depth. Lines 14 - 18 then add one IndentationBegin / IndentationEnd pair per level, so the single captured pair becomes N pairs. The method then returns this newly augmented list of scopes. Basically, the augmentation goes from

 

ScopeName.IndentationBegin,
ScopeName.IndentationEnd

to

ScopeName.IndentationBegin,
ScopeName.IndentationBegin,
ScopeName.IndentationEnd,
ScopeName.IndentationEnd

 

Registering a Scope Augmenter

Just as with registering macros and renderers, there is a static registration endpoint - and only a static one. Since augmenters should not rely on runtime dependencies, there is no runtime equivalent for supplying Scope Augmenters. When you register a Scope Augmenter, it is always associated with a single macro type, and during parsing, WikiPlex will query for the Scope Augmenter associated with the macro currently being processed. To register your Scope Augmenter, place the following code in your application startup method:

ScopeAugmenters.Register<IndentationMacro, IndentationScopeAugmenter>();

When you call WikiEngine.Render("content"), it will automatically pick up all registered Scope Augmenters and use them during parsing.
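For example, a minimal sketch of startup registration plus a render call - note that WikiPlex already wires up its own built-in augmenters, so the explicit Register call is only needed for a custom augmenter like the one built above:

ScopeAugmenters.Register<IndentationMacro, IndentationScopeAugmenter>();

var engine = new WikiEngine();
string html = engine.Render(":: I am content");
// Expected, per the example earlier in this post:
// <blockquote><blockquote>I am content</blockquote></blockquote>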

 

Summary

You now have a fully functioning macro / augmenter / renderer combination that takes an arbitrary indentation depth and renders it correctly. As previously stated, WikiPlex also ships two other Scope Augmenters, ListScopeAugmenter and TableScopeAugmenter, which have a bit more logic in them. While Scope Augmenters let you manipulate the list of scopes prior to rendering, they should only be used in situations where you cannot capture the correct set of conditions in the macro itself or are unable to contextually determine rendering based on separate scopes.

 

Download WikiPlex now!



WikiPlex v1.2 Released

October 8, 2009 13:46 by matthaw

It's been a few months since the last release of WikiPlex, but I honestly have a good reason - paternity leave! This updated version takes a lot of user feedback and puts it into action - so thank you for contributing ideas. Since the last release, there have been steady downloads of the binaries and source code on a day-to-day basis, and I'm very happy with where it is in the ecosystem.

 

Here's what you'll find in WikiPlex v1.2:

  1. Indentation Macro - This new macro adds support for blockquote indentation. Use it like the ordered/unordered list macros, but with the colon character. See this documentation for an example of how to use it.
  2. Silverlight Macro - This macro was updated to support:
    1. Any type of height/width unit (px, pc, pt, em, %, etc).
    2. Require Silverlight 3 as the default. You can optionally revert back to Silverlight 2 with the version=2 parameter.
    3. Support for initialization parameters by supplying any additional key/value parameters in the macro
  3. Video Macro - This macro was updated so that:
    1. Height/width supports any unit type (px, pc, pt, em, %, etc.)
    2. Videos no longer auto-start by default.
    3. Soapbox support has been removed.
  4. Syntax Highlight Support - Two more languages have been included:
    1. {code:c++} ...Your C++ Code... {code:c++}
    2. {code:java} ...Your Java Code... {code:java}
  5. Updated Sample Application - A WebForms variant was added to the sample application. It can be found under the /WebForms directory.

Unfortunately, I didn't get around to documenting the API, but I haven't heard anyone say they can't figure it out - so maybe it'll still happen sometime in the future. As always, if you have any ideas for new macros or extensibility points, please post them in the issue tracker and I'll make sure to look at them!

 

Now, go and download WikiPlex v1.2! Also, don't forget to follow me on Twitter @matthawley

 




WikiPlex v1.1 Released

August 4, 2009 18:31 by matthaw

It's only been a few weeks since the v1.0 release, but a lot of work has gone into a new release of CodePlex's embeddable wiki engine, WikiPlex. The time in between has not been without a few highlights, including (detailed stats):

  • Over 500 combined downloads of the v1.0 engine and sample application
  • Roughly 175 downloads of the source code (not including SVN enlistments)
  • Over 8000 page views / 2200 visits of the project site
  • SoftPedia picking up WikiPlex (Download WikiPlex Free Open Source Wiki Engine from Microsoft)
  • WikiPlex being already integrated in several projects (SWiki / Umbraco)

I didn't have a long-term plan for revamping the wiki engine (I actually still don't!), but I did have several short-term goals I wanted to achieve, including Mono compatibility, refactoring the macro parsing to enable more extensibility in the engine, and general "clean-up" tasks I'd been wanting to do. So, what's new in v1.1?

  1. Mono Compatibility - The WikiPlex source has been tested against the 2.4.2.1 release of Mono running on Linux. The source cleanly compiles and runs the sample application (note: you do still have to set up your own database for this). The only remaining issues running the sample application on Mono are ASP.NET MVC / Mono bugs.
  2. Scope Augmenters - Scope Augmenters allow changing the resulting scopes prior to rendering based on a macro mapping. Previously - the Table, Ordered List, and Unordered List scope augmentation were hard-coded into the MacroParser. With this release, you can now add your own augmenters to fully control the rendering of WikiPlex.
  3. Syndicated Feeds - The entire WCF syndication API was removed in favor of a simpler, customized syndication framework. The main reasons for this change: Mono does not currently support that API, and we needed to support Google's odd Atom specification. Aside from these internal changes, the macro was expanded so that it now supports {rss:url=...}, {feed:url=...}, and {atom:url=...}. No matter which format you use, the renderer will still choose the appropriate syndication reader (i.e., you can specify rss for an atom feed, and vice versa).
  4. Sample Download - To be honest, I had never opened the sample project from the .zip file, so I never realized the state it was in. Be it missing files, incorrect references, whatever - everything is fixed now! Along with that, several people indicated they didn't have SQL 2008 Express installed, so the App_Data directory now includes a Wiki.sql file that you can execute on your local SQL server to create the sample tables/data (oh, and don't forget to change your connection string).
  5. Invalid Syntax Highlight Code Blocks - Previously, if someone supplied a {code:xxx} block that didn't match any of the supported languages, their source code would not be formatted as code. In v1.1, it falls back to the non-syntax-highlighted display of the code if the language cannot be found.
  6. Namespace Cleanup - This should only affect advanced users of the wiki engine. Below is a list of the changes:
    1. ScopeName was moved from WikiPlex.Common to WikiPlex
    2. IXmlDocumentReader and XmlDocumentReader were moved from WikiPlex.Common to WikiPlex.Syndication

One item still on my plate is actually spending some time documenting the API and producing a help file. Hopefully the API isn't too difficult to understand, but I realize documentation is somewhat necessary when it comes to implementing the engine within your application. Regardless, I feel the engine is pretty close to where it needs to be regarding usability and extensibility. If you have any ideas for new macros or extensibility points, please post them in the issue tracker and I'll make sure to look at them!

 

With that said, I give you WikiPlex v1.1 (go and download it now!) Also, follow me on Twitter @matthawley

 




Extending WikiPlex with Custom Renderers

July 30, 2009 13:22 by matthaw

Following up on my prior post, Extending WikiPlex with Custom Macros, it's now time to talk about creating custom renderers. When we left off, we had created our title link macro and registered it with WikiPlex. If you had attempted to use the new macro in wiki content, you'd have gotten the message "Cannot resolve macro, as no renderers were found." instead of a hyperlink. This is expected behavior: while parsing had identified your new scope, the critical link of turning that scope into actual content was missing. Let's take a look at our scenario again to refresh what we're trying to achieve:

 

We would like to integrate WikiPlex into an existing application. The idea is to allow a user-contributed area specifically for wiki content. The user should be allowed to use all the out-of-the-box macros, but should also be able to create inter-wiki links with the format [Title of Page]. As you've probably realized, there is currently no macro/renderer that will take that content and turn it into an inter-wiki link, so we'll have to extend WikiPlex to add this functionality.

Create a Renderer

Creating a renderer is actually the easiest part of defining a new wiki syntax, as it's only as complicated as you need to make it. Again, a renderer simply takes in a scope (which is a contextual identifier), processes the content, and returns new content. Let's get started: in your solution, create a class called TitleLinkRenderer that implements WikiPlex.Formatting.IRenderer. You'll then implement the members it requires (Id, CanExpand, and Expand). The Id value is simply a string used as a key for static renderer registration, so it should be unique (follow the same naming rule of thumb as for macros).
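As a minimal sketch of the shell of that class - the Id string and field name are my own choices, and CanExpand / Expand are filled in over the next sections (the UrlHelper constructor parameter matches the sample usage later in this post):

public class TitleLinkRenderer : IRenderer
{
   private readonly UrlHelper urlHelper;

   // The UrlHelper is supplied by the caller (see GetFormatter below).
   public TitleLinkRenderer(UrlHelper urlHelper)
   {
      this.urlHelper = urlHelper;
   }

   // Unique key used for static renderer registration.
   public string Id
   {
      get { return "TitleLinkRenderer"; }
   }

   // CanExpand and Expand are implemented in the following sections.
}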

 

Next, you'll implement the CanExpand method. This method takes in a scope name and returns a boolean indicating whether this renderer can expand (or render) that scope. As the formatter processes scopes, it goes through its list of renderers and finds the first one that can expand the current scope. There is no guarantee of the order in which renderers are checked, so if you're overriding a renderer's implementation, always unregister the original. As you'll see below, the CanExpand method is fairly trivial; should your renderer support a number of scopes, you'll need to include all of them here.

public bool CanExpand(string scopeName)
{
   return scopeName == WikiScopeName.WikiLink;
}

Next, you'll implement the Expand method. This method takes in a scope name, the related input from the wiki source, and HTML / attribute encoding functions. The encoding functions are passed in so that a consistent encoding scheme can be used across all renderers. Out of the box, WikiPlex uses HttpUtility.HtmlEncode and HttpUtility.HtmlAttributeEncode, but by creating and supplying your own formatter you can switch to another library (like AntiXss). As previously stated, rendering is only as hard as you need it to be. In the sample application, we're just rendering a link using the ASP.NET MVC UrlHelper (which is supplied via the constructor).

private const string LinkFormat = "<a href=\"{0}\">{1}</a>";

public string Expand(string scopeName, string input,
                     Func<string, string> htmlEncode, 
                     Func<string, string> attributeEncode)
{
   string url = urlHelper.RouteUrl("Default", new { slug = SlugHelper.Generate(input) });
   return string.Format(LinkFormat, attributeEncode(url), htmlEncode(input));
}

And now you have created your renderer. However, it will still not be picked up when rendering your wiki content until you register it with WikiPlex.

 

Registering a Renderer

Just as with registering a macro, you have a static and a dynamic way to register your renderers. If your renderer has only static dependencies (or no external runtime dependencies), you should opt for statically registering it. To do this, place the following code in your application startup method:

Renderers.Register<TitleLinkRenderer>();

When you call WikiEngine.Render("content"), it will automatically pick up all statically defined renderers and use them when formatting your scopes. As previously stated, if you have external runtime dependencies (as in our example), a little extra work is required when calling WikiEngine.Render, as you'll need to pass in a MacroFormatter instead. Because that overload takes in only a formatter, you'll need to union the statically defined renderers with your own:

private MacroFormatter GetFormatter()
{
   var siteRenderers = new IRenderer[] {new TitleLinkRenderer(Url)};
   IEnumerable<IRenderer> allRenderers = Renderers.All.Union(siteRenderers);
   return new MacroFormatter(allRenderers);
}

Now, when you call WikiEngine.Render, use the overload that takes an IFormatter parameter. After you've set all of this up, try reloading your page: you should see your [Title Link] syntax converted into the HTML <a href="/title-link">Title Link</a>.
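In other words, the call site ends up looking something like this (a sketch reusing the GetFormatter helper above):

var engine = new WikiEngine();
string html = engine.Render(wikiContent, GetFormatter());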

 

Summary

You now have a fully functioning new macro syntax. Obviously, this example is trivial - but I guarantee that if you embed WikiPlex into your application and need any cross-page linking, you'll use this macro and renderer. Again, the possibilities are endless: as long as you have a syntax, a regex, and rendering code, you can let your users include expansive macros with simple markup.

 

Download WikiPlex now!

 




WikiPlex – An Embedded Wiki Engine

July 16, 2009 11:04 by matthaw

What and Why?

I'd like to introduce you to WikiPlex, CodePlex's wiki engine, which we have re-written and made open source under the MS-PL license. I'm also happy to announce that our first public release is now available!

 

CodePlex previously had a decent wiki engine that was written eons ago. On average, that wiki engine worked relatively well, but it had a very problematic performance bug that would occasionally cause rendering slowness. So instead of attempting to fix the bug, we decided to re-write the entire thing with the intention of making it available to everyone! This time, we chose a different approach to parsing the wiki markup - regular expressions - which has given us a performance boost as well as a relatively simpler architecture.

 

The main question you may be asking yourself is: why use WikiPlex over a different solution? Here's the simple answer: WikiPlex is great if you already have a .NET application into which you'd like to embed a wiki interface. Be it as simple as letting users write their own homepage content, item descriptions, or comments - the possibilities are endless!

 

Usage

WikiPlex was built so that it can easily be added to your infrastructure. Whether you're using dependency injection or not, the code is as simple as the following:

var engine = new WikiPlex.WikiEngine();
string output = engine.Render("This is my wiki source!");

If you take a look at the overloads for Render, you'll see that you have a lot of flexibility for rendering various wiki segments differently at runtime (I'll describe the extensibility in a future post):

public interface IWikiEngine
{
   string Render(string wikiContent);
   string Render(string wikiContent, IFormatter formatter);
   string Render(string wikiContent, IEnumerable<IMacro> macros);
   string Render(string wikiContent, IEnumerable<IMacro> macros, IFormatter formatter);
}

Supported Macros

The following are the macros supported out of the box for WikiPlex. If you'd like to see the description and usage, please visit the markup guide.

  • Text Formatting
    • Bold
    • Italics
    • Underline
    • Strikethrough
    • Superscript
    • Subscript
  • Headings
  • Images
  • Links
  • Tables
  • Left and Right Aligned Text
  • Ordered and Unordered Lists
  • RSS / Atom Feeds
  • Source Code Blocks (both syntax highlighted and not)
  • Silverlight
  • Videos (Flash, Quicktime, Real, Soapbox, Windows Media, and YouTube)

Enjoy! And don't forget to download WikiPlex now!

 




MEF + Factories Using an Export Provider

November 29, 2008 10:25 by matthaw

After my last post about MEF + Factories, I was chatting with Glenn Block (PM for MEF) about my approach. One of the first things he asked was why I hadn't used an ExportProvider. ExportProviders are new in the latest drop, and they provide an interception point for resolving an import to an export.

 

Taking a look back at the documentation on ComposablePart, I can see why Glenn mentioned this. It states that a ComposablePart must adhere to Import/Export contracts. If you've looked at the prior example, you'll see that I was explicitly violating that rule by finding interfaces based on a base interface, not by Export! At this point my mind started churning - partly because there aren't a lot of examples or descriptions of what an ExportProvider actually is, but mostly because I wanted to do things "correctly" according to the framework. (As an aside, Glenn stated that a "What is an ExportProvider" post or documentation is coming; maybe this will boost that necessity!) What I ultimately came up with was a much cleaner solution than using a FactoryPartCatalog.

 

Introducing FactoryExportProvider:

public class FactoryExportProvider<T> : ExportProvider {
   public FactoryExportProvider(Func<Type, T> resolutionMethod) { }
   public FactoryExportProvider(Assembly assembly, Func<Type, T> resolutionMethod) { }
   public FactoryExportProvider(IEnumerable<Type> types, Func<Type, T> resolutionMethod) { }
}

What you'll see, again, is that it's very straightforward and provides a clean implementation in usage, just as the part catalog example did. Each constructor does just what its part catalog counterpart did, so I won't explain them again. One thing I discussed with Glenn was whether it is appropriate to look for certain types within an ExportProvider; his response was "absolutely you can do that". Good - I think I've found the correct implementation, both from a less-code/smaller-object-graph standpoint and from an intention standpoint.

 

Internally, the code is very simple. Since I'm finding these interfaces on the fly and need more information than just the contract name, I use a FactoryExportDefinition to store this information. If you've looked at the prior example, you'll see this came back out of necessity.

public class FactoryExportDefinition<T> : ExportDefinition {
   public FactoryExportDefinition(string contractName, Type type, Func<Type, T> resolutionMethod) { }

   public override string ContractName { get { ... } }
   public Type ServiceType { get; private set; }
   public Func<Type, T> ResolutionMethod { get; private set; }
}

When finding all of the interfaces that implement the base interface specified in the FactoryExportProvider, I convert them into a list of FactoryExportDefinition objects. The reason is that the export provider compares an ImportDefinition against an ExportDefinition when finding all available exports. This comparison is done by implementing the GetExportsCore method. The idea behind export providers is that, when resolving dependencies, MEF calls into all registered ExportProviders to determine whether they can supply the export, and does a bunch of cardinality matching for you. Out of the box, MEF provides an export provider for its registered part catalogs. Here's the FactoryExportProvider's implementation of GetExportsCore:

protected override IEnumerable<Export> GetExportsCore(ImportDefinition importDefinition) {
   IList<Export> exports = new List<Export>();
   var constraint = importDefinition.Constraint.Compile();
   var foundExports = from d in definitions
                      where constraint(d)
                      select new Export(d, () => d.ResolutionMethod(d.ServiceType));

   if (importDefinition.Cardinality == ImportCardinality.ZeroOrMore)
      exports = foundExports.ToList();
   else if (foundExports.Count() == 1)
      exports.Add(foundExports.First());

   return exports;
}

It's that simple. The Exports that are returned will have their resolution method called when the actual object is needed. Including this in your application is just as easy as it was for the part catalog; you just register things a bit differently:

public interface IService { }
public interface IUserService : IService { }

[Export]
public class UserController {
   [ImportingConstructor]
   public UserController(IUserService userService) { }
}

// in your application
private void Compose() {
   var catalog = new AttributedAssemblyPartCatalog(Assembly.GetExecutingAssembly());
   var factoryProvider = new FactoryExportProvider<IService>(GetService);
   var container = new CompositionContainer(catalog, factoryProvider);
   container.AddPart(this);
   container.Compose();
}

public IService GetService(Type type) { return ... }

And that's it. Ultimately, this leads to a cleaner implementation that uses fewer types for you to manage and adheres to the intended usage of the framework. Many thanks to Glenn, who I chatted with for several hours last night! You can get the downloadable source here. Enjoy!

 





