Give Scott Guthrie's post a read: About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular)
Brilliant!
We are currently having debates over technologies in our company. The general consensus among “the powers that be” is that the technologies involved are too “new” and unfamiliar to the majority of the people implementing them, and that this is why some projects have burned.
My opinion is that it is less about the new technologies and more about the people implementing them (“bad developers using great tools/frameworks can still make bad apps”). A huge topic of debate is the “new” ORM we are using (NHibernate) - the older generation of intermediate/junior developers do not understand how to use it, despite the fact that NHibernate has been around for longer than some of the people complaining about it have been in the industry!
I suppose this is an age-old debate. From a business point of view, the safe option is to keep doing what we have been doing fairly successfully, in technologies we have been using and that are well understood. But the risks associated with this are:
1) Losing the A-grade developers, who are more interested in leveraging technology to solve their problems than in needlessly coding around problems using the same method over and over again (i.e. getting stuck in a rut).
2) Falling behind the industry from a technology point of view, losing the edge, and missing out on future productivity enhancements that could cut costs and time.
Sunday, January 24, 2010
Tuesday, January 19, 2010
Data Ownership
I'm sure that every company has some sort of data being pushed/pulled from one system to another.
One of the biggest mistakes in the integration world, in my mind, is not taking the ownership of data into account when data is passed from one system to another.
A classic example:
At a client I'm working for, we have an application that is fed information about employees. We manage and enrich certain aspects of the data surrounding each employee in order to perform certain tasks. We are currently revisiting the integration strategy and investigating the flow of information, and I find the team talking about a certain piece of information; let's call it X.
Q: Do we use X anywhere?
A: No.
Q: Why do we have X?
A: We need to send it to a downstream system.
Q: Do we need to pass it on?
A: Yes, because we don't know who they send it on to.
So why does the downstream system not get it from the source, or the master? Why do we need to be the source of data that we are not responsible for?
The end result of not defining clear owners of data is that you end up with very brittle point-to-point integrations that can get out of hand and become unmanageable very quickly. There should be only one master, and downstream systems should source the required information from that system.
There are obviously many strategies for sharing/integrating information, but unless clear owners are defined, you have bigger problems than trying to get data from one system into another.
Silverlight and WCF using NTLM authentication
We have a Silverlight 3 application that makes use of WCF services in a standard two-tier RIA architecture. As the application runs on the client's intranet and is available to all employees, Windows authentication was the logical authentication mechanism.
Most of the samples that are available make this seem very simple and straightforward: use a custom binding (to leverage binary message encoding for performance reasons), set the httpTransport's authenticationScheme to Ntlm (roughly the binding sketched below), and configure IIS with "Windows" authentication enabled and "anonymous" disabled. No problem... or so we thought.
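For reference, the service-side binding those samples describe can be sketched programmatically along these lines (the class and method names are mine, purely for illustration):

using System.Net;
using System.ServiceModel.Channels;

public static class NtlmBindingSample
{
    // Binary message encoding over HTTP, with the transport configured to
    // request NTLM authentication - the binding the samples describe.
    public static Binding CreateBinaryNtlmBinding()
    {
        var transport = new HttpTransportBindingElement
        {
            AuthenticationScheme = AuthenticationSchemes.Ntlm
        };

        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            transport);
    }
}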
We just could not establish the identity of the client on the service. This is where a three-month story starts, complete with a support ticket logged with Microsoft, the works...
Firstly, IIS would summarily deny any access to the service. Then, if we allowed anonymous authentication on the service, the identity on the current WCF OperationContext was obviously not set. We eventually had a work-around where we injected the identity from the Silverlight host aspx page into Silverlight and then packaged the credentials as a custom header with each call (roughly as sketched below). Not ideal, but the time constraints were insane and we needed to get the first release out...
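The shape of that work-around was roughly the following (the initParams key, header name, namespace and proxy/class names are all illustrative, not what we actually shipped): the host page emits the authenticated identity as an initParams value, App.xaml.cs picks it up at start-up, and each service call attaches it as a custom message header.

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Windows;

public partial class App : Application
{
    // Identity injected by the host .aspx page, e.g.
    // <param name="initParams" value="user=DOMAIN\jbloggs" />
    public static string CallerIdentity { get; private set; }

    public App()
    {
        Startup += Application_Startup;
        InitializeComponent();
    }

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        string user;
        e.InitParams.TryGetValue("user", out user);
        CallerIdentity = user;
        RootVisual = new MainPage();
    }
}

public static class ServiceCalls
{
    // Attach the injected identity to an outgoing call as a custom SOAP header.
    public static void LoadEmployees()
    {
        var client = new EmployeeServiceClient();
        using (new OperationContextScope(client.InnerChannel))
        {
            OperationContext.Current.OutgoingMessageHeaders.Add(
                MessageHeader.CreateHeader("CallerIdentity", "urn:my-app", App.CallerIdentity));
            client.GetEmployeesAsync();
        }
    }
}

The obvious weakness is that the header is only as trustworthy as the page that injected it, which is part of why this was never more than a stop-gap.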
We logged a call with Microsoft to see if they could shed some light on the issue for us. We developed some sample applications that, lo and behold, worked without a hitch. We ran network traces, but for some or other reason the call from our application to the services would be stopped dead in its tracks with a 401 Unauthorized. We configured IIS to only use NTLM, as Negotiate with its default Kerberos scheme can cause problems, but to no avail.
So here is the kicker, and something that is etched in my memory... I started looking at every trace I could find throughout the application to determine exactly what happens, start to finish, researching any and every step in detail.
I came across a single line in the start-up of the Silverlight client:
System.Net.WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
The primary reason this line was put in was to ease debugging of the service calls. It forces Silverlight to use its own HTTP request stack, which understands valid SOAP faults, whereas the default System.Net.Browser.WebRequestCreator.BrowserHttp will return the notorious "NotFound" exception.
It turns out that the MSDN documentation contains a line that states: "You can register an alternative HTTP stack using the RegisterPrefix method. Silverlight 3 provides the “client HTTP stack” which, unlike the “browser HTTP stack”, allows you to process SOAP-compliant fault messages. However, a potential problem of switching to the alternative HTTP stack is that information stored by the browser (such as authentication cookies) will no longer be available to Silverlight, and thus certain scenarios involving secure services might stop working, or require additional code to work.”
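In hindsight, one way to keep the fault-friendly client stack without starving authenticated calls of the browser's credentials would be to scope the registration to specific service prefixes instead of all of "http://" (a sketch only; the URI below is hypothetical, and this is not what we originally shipped):

using System.Net;
using System.Net.Browser;

// Only the endpoints that need SOAP fault details use the client HTTP stack;
// everything else stays on the browser stack, so browser-managed credentials
// (including Windows authentication) keep flowing.
bool registered = WebRequest.RegisterPrefix(
    "http://intranet/FaultReportingService.svc",
    WebRequestCreator.ClientHttp);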
So, after about three months of blood, sweat and endless phone calls with Microsoft support engineers, one single line turned out to be the cause of our problems...
You’ve been warned.