Being Ignorant with LINQ to SQL

Before we begin

Most of the code here was written on the May 2006 CTP. That’s just because I am moving house and do not have access to my Orcas beta at the moment. This is also a preliminary discussion that I hope to follow with more detailed analysis, as time permits.

Models and Ideologies

LINQ to SQL differs from LINQ to Objects in that it is about working with objects that are persistent; in other words, objects that have a lifetime beyond the application. There are different styles of dealing with persistence; preference for one or the other usually depends on whether our perspective is data-centric or domain-centric.

Data-centric designs tend to flow the relational model into the code. Datasets are a good example of a data-centric approach: they preserve much of the relational structure within their representation. Working with these models often leads to a Transaction Script or Table Module design. People who do not like Datasets sometimes use a Row Data Gateway pattern; these remain data-centric in origin, though, because they focus not on behavior but on state. The domain logic thus tends to live outside the object, which domain-centric advocates characterize as the anti-pattern of an anemic domain model.

Those who tend to be domain-centric flow the domain model out to their persistent store. In most cases that persistent store is a relational database, so domain-centric developers need to translate between the two models. The need to translate is called the object-relational impedance mismatch; Data Mapper and Active Record are the usual approaches to this translation. The problem is well known enough for tools to exist that solve it, and developers should rarely need to write their own solution. At this time the tools developed to overcome this problem are all outside the Microsoft space, such as NHibernate and XPO. One, Wilson O/R Mapper, was inspired by a previous attempt by Microsoft to produce an ORM tool in .NET 2.0: ObjectSpaces.

Update: Greg Young points out that Active Record is a data-centric approach because you don’t do any mapping: there is a one-to-one correspondence between your Active Record and the table. Fowler says "The data structure of the Active Record should exactly match that of the database: one field in the class for each column in the table." Greg has a good point here, as we are still thinking in terms of the database schema.

LINQ to SQL is Microsoft’s first attempt at an ORM that ships with the .NET Framework. I would typify it as a domain-centric tool, both because of its design goal of making it possible to share one query syntax across many collection types and because of the feature set provided by the DataContext.

Because many MS developers are more comfortable with the data-centric world, I wanted to give a domain-centric developer’s approach to working with LINQ.

Cataloguing the LINQ to SQL Feature Set

At this point let’s try to identify correspondences between LINQ to SQL and the catalogue of patterns Fowler identifies in Patterns of Enterprise Application Architecture. Patterns give us a good shorthand for discussing technology, and identifying which patterns are in play allows us to build on our understanding of those patterns when using LINQ.

LINQ to SQL uses the Data Mapper architectural pattern. A DataContext holds a collection of type Table<T>, where T is the type we are persisting to the DB. We write LINQ queries against the Table<T> object, which is an IQueryable<T>. LINQ turns our query into an expression tree and generates the appropriate SQL for the query. So IQueryable<T> resembles a Query Object; in its composability it also resembles what Evans calls a Specification in Domain-Driven Design.

Collectively the DataContext and Table<T> do Metadata Mapping to handle loading and saving of objects from the store. LINQ to SQL uses a reflection-based approach over code generation for mapping. When working with LINQ to SQL you write queries and perform persistence operations against a DataContext. The DataContext implements the Unit of Work pattern; LINQ provides an Identity Map, a consequence of providing a unit of work, and also provides support for Lazy Loading.

LINQ supports Foreign Key Mapping through the EntitySet and EntityRef collection types. It does not support eliding the many-to-many table in Association Table Mapping. It does not support Dependent Mapping, but as it implements a unit of work this is not surprising. LINQ to SQL also supports inheritance with one table per inheritance hierarchy, with subtypes identified via a discriminator column. This is usually called Single Table Inheritance, as opposed to a table per type (Class Table Inheritance) or a table per concrete type (Concrete Table Inheritance). I have been used to working with Wilson O/R Mapper, which also only supports Single Table Inheritance, and I never found it a serious limitation provided you are not working with a legacy schema.
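To make the correspondence concrete, here is a minimal sketch of querying through the DataContext. The connection string and the Customer mapping are assumed to exist; the query is only translated to SQL when it is enumerated:

```csharp
// Sketch only: assumes a mapped Customer class and a valid connection string.
DataContext context = new DataContext(connectionString);
Table<Customer> customers = context.GetTable<Customer>();

// The IQueryable<Customer> acts as a composable Query Object;
// LINQ builds an expression tree and generates SQL on enumeration.
var londoners = from c in customers
                where c.City == "London"
                select c;
```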

Using LINQ for domain centric development

The selling point for domain-centric approaches, such as Test-Driven Development and Domain-Driven Design, is that they provide a lower total cost of ownership. The code is clean, easy to refactor, and by implication easier to test, and it better encapsulates and expresses the domain knowledge: which is the true value that application developers are capturing.

Persistence ignorance, a domain-driven design principle, states that persistence code is orthogonal to domain logic: a domain object should not care how it persists itself, but should instead rely on an infrastructure service to persist its state. Persistence ignorance is an extension of the single-responsibility principle, which holds that a class should have one and only one reason to change; bearing responsibility for both persistence and domain logic violates this principle. Data Mapper is truly ignorant in a way that Active Record is not. In his book Applying Domain-Driven Design and Patterns: With Examples in C# and .NET, Jimmy Nilsson identifies the following as things you should not have to do under persistence ignorance:

  • Inherit from a certain base class (besides object)
  • Only instantiate via a provided factory
  • Use specially provided datatypes, such as for collections
  • Implement a specific interface
  • Provide specific constructors
  • Provide mandatory specific fields
  • Avoid certain constructs
  • Write database code such as calls to stored procedures in your Domain Model classes

Some people also talk about POCO, or Plain Old CLR Objects, when referring to this issue. The POCO approach comes from the POJO movement in Java, which emerged as a reaction to the over-complexity caused by J2EE. The term originally identified non-serviced components; overloading it to imply a persistence-ignorant object seems to have come from NHibernate, where it was also read as Plain Old C# Objects. I’ll stick to persistence ignorance to avoid any of the confusion that trails around what we mean by POCO.

LINQ to SQL conformance to the goals of Persistence Ignorance

Running through Jimmy’s list, we can catalogue the extent to which LINQ to SQL enables a PI approach to development.

Inherit from a certain base class (besides object)

We do not have to inherit from a certain base class on objects we wish to persist; i.e. this is not the Active Record pattern.

Only instantiate via a provided factory

There is no requirement to instantiate via a factory. This requirement usually exists because it is the only way the infrastructure can track the entities. With LINQ we can add or attach entities to the unit of work when we want the infrastructure to be aware of them (add is for new entities, attach for existing ones; the difference is effectively insert vs. update).
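As a hedged sketch (method names as in the May 2006 CTP; existingCustomer is assumed to be an entity obtained elsewhere), the add/attach distinction looks like this:

```csharp
DataContext context = new DataContext(connectionString);
Table<Customer> customers = context.GetTable<Customer>();

customers.Add(new Customer("NEWCO")); // new entity: becomes an INSERT
customers.Attach(existingCustomer);   // existing entity: becomes an UPDATE

context.SubmitChanges();              // the unit of work generates the SQL
```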

Use specially provided datatypes, such as for collections

We do need to use EntityRef and EntitySet when representing associations. This restriction is not uncommon in ORM tools, where providing Lazy Loading often requires a collection to be a proxy that retrieves the underlying values only when they are accessed. LINQ to SQL does not clear this particular bar, though it is possible to confine the requirement to the private field providing the storage and expose an IList<T> from a property.

Implement a specific interface

There is no requirement for persistable types to implement a specific interface. Collections need to implement IEnumerable<T> or IQueryable<T> to work with LINQ, but this is true whether we are persisting or not; i.e. it is a feature of LINQ in general and not of LINQ to SQL specifically. In addition, all collections tend to implement IEnumerable<T> to support foreach, and there is a conversion (OfType) from IEnumerable to IEnumerable<T> for older non-generic collections.
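For example, a minimal sketch of bridging a legacy collection into LINQ:

```csharp
// Older non-generic collections only implement IEnumerable...
ArrayList legacy = new ArrayList();
legacy.Add("ALFKI");

// ...but OfType gives us an IEnumerable<T> that LINQ can query.
IEnumerable<string> ids = legacy.OfType<string>();
```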

Provide specific constructors

This requirement is associated with the need for the framework to create instances of your type. You will need a default constructor to allow creation of objects by the framework, which, as ever, only becomes an issue once you add a specific constructor.

Provide mandatory specific fields

This requirement is usually associated with tracking whether objects are dirty. There is no need to provide specific fields with LINQ to SQL: the unit of work determines whether an object is dirty by comparing the state of the persisted object against a copy of its state taken when it was loaded, and from that determines what has changed and therefore what SQL to generate. Because it involves a comparison this is less optimized, so LINQ to SQL also provides a notification mechanism that allows an entity to notify the unit of work that it has changed. This supports non-PI usages; I would tend not to bother with it unless I knew that a specific entity had performance issues this mechanism would solve, and that the loss of PI was worth the pain.
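If you do opt into notification, one plausible shape is the standard property-changed pattern; the exact interface varies between builds, so treat the names here as an assumption rather than the documented mechanism:

```csharp
// Hedged sketch: an entity that notifies the unit of work of changes,
// trading some persistence ignorance for cheaper dirty tracking.
public class Customer : INotifyPropertyChanged
{
    private string companyName;

    public event PropertyChangedEventHandler PropertyChanged;

    public string CompanyName
    {
        get { return companyName; }
        set
        {
            companyName = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("CompanyName"));
        }
    }
}
```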

Avoid certain constructs

LINQ to SQL does not require you to avoid certain constructs or programming idioms.

Write database code such as calls to stored procedures in your Domain Model classes

The point of LINQ to SQL is to replace the need for stored procedure code, so there is no need to include any stored procedure code in the domain model.

LINQ to SQL scores pretty well against the PI checklist. As always there are trade-offs where performance can be obtained through specific features. It would be nice if we could choose to trade lazy loading for standard collections, so that specific collection types for associations were needed only when we wanted lazy loading; but otherwise there is little to complain about here.

Code Generation

This article is about a TDD approach to using LINQ, which means that I am not using the code generation made available through the designers in Orcas.

Code generation is a reaction to the realization that a lot of code can be described more quickly than it can be written. For those who adopt a data-centric approach, the design work is in creating the data model; once we have that model, generating a set of classes from it seems a mechanical process, one we could automate to produce the appropriate artifacts. For those who adopt a domain-centric approach, the design work is in creating the domain model; once we have that model, creating the schema would seem to be a mechanical operation we could automate to produce the appropriate artifacts.

So a lot of people use code generation when developing their data access layer. I’m always cautious around generated code or databases: it is often when we devalue the code that we choose to generate it. So data-centric designers happily generate the data access layer, and domain-centric developers happily generate the DB, because they have devalued those aspects of the product. I think both sides can miss something by doing this, as the tools are often blunt-edged and the resulting code not clean. In addition, regenerating the generated code tends to cause issues, because that generated code is coupled to un-generated code, even if people refrain from editing the generated artifacts. Merging development streams often becomes an issue too.

Code generation is about productivity, but when we look at productivity we also need to look at maintainability, because what matters is productivity over the whole lifetime of the application. The best way I know to make working with a codebase productive is to make it easy to refactor, which implies that we have good automated tests, hopefully from test-driven development. My experience is that once you adopt TDD you tend to do less code generation and make more use of generics and reflection to solve similar classes of problems, because this removes the problem of finding an appropriate testing strategy for all those generated artifacts.

SQLMetal provides code-generation support for strongly typed data contexts in LINQ to SQL (for both mapping-file and attribute-based approaches); Orcas will ship with designers for people who don’t like working with a command line. I prefer to avoid them for anything that is not demo-based or first-cut.

Test-Driven Development and LINQ

While writing unit tests, we could access our data directly through LINQ’s DataContext. Our problem here is that we do not want to access the database during a unit test. Why? Because such tests tend to be slow. When you are running hundreds of tests (and you will be if you are doing TDD), repeated data access makes the unit-test run take so long that developers resist running it, because it soaks up too much time and breaks their rhythm. In addition, tests that talk to the DB are fragile, because we have to set the state of the DB before and after our test run. It becomes painful if we have to set up a collection of data to test some orthogonal business logic. Our usual solution to this problem is to use the Repository pattern to abstract away where the data is persisted. This allows us to use an in-memory collection for our unit testing and a DB collection for our integration and acceptance tests. TDD is actually flushing out bad design here by forcing us to decouple our domain model from the underlying infrastructure services that provide persistence.

Usually our practice is to provide an interface for the Repository type and then implement a version that works with an in-memory collection and a version that really talks to the DB for use at run-time (and for functional testing).

This of course fits with the notion that a "Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection".

But we might note that LINQ syntax treats both object and SQL collections in a similar fashion – in fact that is one of its design goals – so it would seem obvious to lean on LINQ to give our repository the ability to swap the collection it iterates over at run-time between in-memory and DB versions. The advantage is that we should also be able to get LINQ to provide ad-hoc querying of that repository, without caring whether the repository refers to a DB collection or an in-memory collection. This is a truer form of repository than one in which we have differing implementations for DB and test collections.

To pull off this repository switch we are going to rely on the magic of IQueryable<T>, and in particular the ability to represent an IEnumerable<T> as an IQueryable<T> through the extension method System.Query.Queryable.ToQueryable<T>(this IEnumerable<T> source). See Matt Warren’s article on IQueryable and Mike Taulty’s article on Deconstructing LINQ to SQL.
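In the May 2006 CTP the lift from list to queryable is a one-liner (in later builds the same operation appears as AsQueryable):

```csharp
List<Customer> customers = new List<Customer>();

// ToQueryable wraps the in-memory list in an IQueryable<Customer>,
// so the same LINQ query syntax works against it.
IQueryable<Customer> queryable = customers.ToQueryable();
```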

Assume that I have a test as follows:

[TestMethod]
public void FindCustomer()
{
    CustomerRepository customerRepository = new CustomerRepository();

    InitializeTestData(customerRepository);

    Customer customer = customerRepository.FindCustomer("ALFKI");

    Assert.IsNotNull(customer);
}

and our goal is an implementation of CustomerRepository that looks something like this:

public class CustomerRepository
{
    private IQueryable<Customer> customers;

    public CustomerRepository() {}

    public CustomerRepository(DataContext context)
    {
        customers = context.GetTable<Customer>();
    }

    public Customer FindCustomer(string customerId)
    {
        return (from c in customers
                where c.CustomerID == customerId
                select c).Single<Customer>();
    }

    public IQueryable<Customer> Customers
    {
        get { return customers; }
        set { customers = value; }
    }
}

and a Customer class that looks like this (note how it is clean of persistence information):

public class Customer
{
    private List<Order> _Orders = new List<Order>();

    public string CustomerID;
    public string CompanyName;
    public string ContactName;
    public string ContactTitle;
    public string Address;
    public string City;
    public string Region;
    public string PostalCode;
    public string Country;
    public string Phone;
    public string Fax;

    public Customer(string customerID)
    {
        CustomerID = customerID;
    }

    public Customer() {}

    public IList<Order> Orders
    {
        get { return _Orders; }
        set { _Orders = value; }
    }
}

Then we can create a method to initialize our test data within the repository, swapping LINQ to Objects in for LINQ to SQL for unit-testing purposes:

private void InitializeTestData(CustomerRepository customerRepository)
{
    customerRepository.Customers = new List<Customer>
    {
        new Customer() {CustomerID = "ALFKI", CompanyName = "Alfreds Futterkiste", ContactTitle = "Sales Representative",
            Address="Obere Str. 57", City="Berlin", PostalCode = "12209", Country="Germany", Phone="030-0074321", Fax="030-0076545"}
    }.ToQueryable();
}

and alter our test as follows:

[TestMethod]
public void FindCustomer()
{
    CustomerRepository customerRepository = new CustomerRepository();

    InitializeTestData(customerRepository);

    Customer customer = customerRepository.FindCustomer("ALFKI");

    Assert.IsNotNull(customer);
}

Our test now doesn’t need to hit the DB to check that the query we have specified works – we just hit the in-memory collection. This ability to swap between LINQ to Objects and LINQ to SQL capitalizes on the commonality of LINQ, allowing us to write a type-safe query that works in both contexts.

Doing Persistence

Now that we have a version that works in-memory we want to hook it up to the DB, using the DataContext provided by LINQ to SQL.

Our Customer class needs to change a little to make it persist with LINQ to SQL. We have to derive the underlying storage for classes involved in an association from the new collection types:

public class Customer
{
    private EntitySet<Order> _Orders = new EntitySet<Order>();

    public IList<Order> Orders
    {
        get { return _Orders; }
        set { _Orders.Assign(value); }
    }
}

The important thing to understand here is that I can complete my domain model working with LINQ to Objects and, only once I am happy that it is right, create the mapping to the DB. If I change the underlying types here, my unit tests should still pass.

Of course our functional tests do need to hit the DB, so we need to be able to pass a properly initialized DataContext to the repository to enable this. At that point I tend to add a few integration tests whose purpose is to check the mapping, but I should not need to write tests for all of the repository’s queries. Once you create a persistence model, the cost of changing your domain model tends to rise, because you have to change the data schema too. That cost can become off-putting, which deters change and leads to software rot. So it’s good to keep our model malleable as long as possible.

We want to use the following mapping file to map between our Customer class and the DB. Attributes tend to be the default for LINQ, but XML mapping files are supported. The choice of attributes vs. XML mapping is really a style issue: I prefer the clean lines of un-attributed class code and consider it more in the spirit of PI, but others prefer dealing with attributes to XML. I’m not too worried about ‘configuration file hell’ and feel comfortable that I can ‘see’ the mapping more easily in this form, but your preferences may differ. This is the XML file I am using here to map Customer to the DB.

<?xml version="1.0" encoding="utf-8"?>
<Database xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Name="Northwind">
  <Table Name="Customers">
    <Type Name="Customer">
      <Column Name="CustomerID" Member="CustomerID" IsIdentity="True" />
      <Column Name="CompanyName" Member="CompanyName" />
      <Column Name="ContactName" Member="ContactName" />
      <Column Name="ContactTitle" Member="ContactTitle" />
      <Column Name="Address" Member="Address" />
      <Column Name="City" Member="City" />
      <Column Name="Region" Member="Region" />
      <Column Name="PostalCode" Member="PostalCode" />
      <Column Name="Country" Member="Country" />
      <Column Name="Phone" Member="Phone" />
      <Column Name="Fax" Member="Fax" />
      <Association Name="FK_Orders_Customers" Member="Orders" Storage="_Orders" ThisKey="CustomerID" OtherTable="Orders" OtherKey="CustomerID" />
    </Type>
  </Table>
</Database>

I tend to embed this resource in the assembly to make shipping easier, and use a helper class to load that embedded resource to pass to the DataContext:

public static class Mapping
{
    public static XmlMappingSource GetMapping()
    {
        XmlMappingSource mapping;
        using (Stream stream = Assembly.GetExecutingAssembly().GetManifestResourceStream("Ignorant.Mapping.NorthWind.map"))
        {
            mapping = XmlMappingSource.FromStream(stream);
        }

        return mapping;
    }
}

With our new version of the test we add an additional constructor to our repository. This one takes a DataContext, which we then use to initialize our properties (this is just inversion of control).

public CustomerRepository(DataContext context)
{
    customers = context.GetTable<Customer>();
}

and then create a slow or integration test like the following to check that we are hooked up correctly (we don’t want to test LINQ, just our mapping):

[TestMethod]
public void FindCustomer()
{
    DataContext context = new DataContext(ConfigurationManager.ConnectionStrings["NorthWind"].ConnectionString, Mapping.GetMapping());
    CustomerRepository customerRepository = new CustomerRepository(context);

    Customer customer = customerRepository.FindCustomer("ALFKI");

    Assert.IsNotNull(customer);
}

The real asset here is that we can get much more code under our unit-testing umbrella: because we test only the expression, and not the SQL implementation generated from the expression tree, we do not cross the boundaries that would stop these being unit tests.

A spanner in the works

However, that is not all we want to be able to do. The purpose of a repository is not just to support querying but all the other things we could do to a collection: add, update, or delete. We want to write something like this:

[TestMethod]
public void AddCustomer()
{
    CustomerRepository customerRepository = new CustomerRepository(context);

    Customer customer = new Customer();
    customerRepository.Customers.Add(customer);

    // ...test that the customer is in the repository using a LINQ expression...
}

But we can’t do this, as IQueryable<T> does not support Add. DataContext actually gives us a Table<T>, which implements IQueryable<T> and supports the required functionality for add and remove; but we can’t convert our List<T> into a Table<T>, so we look to be out of luck with this.

There is nothing new under the sun

Let’s think again. What we are trying to do is switch the infrastructure services our repository is implemented in terms of. For LINQ to SQL we want to use the DataContext, but we want to swap it for a Test Double for testing purposes. Jimmy Nilsson takes just this approach in Applying Domain-Driven Design and Patterns, so that the repository code itself is under unit test and only the infrastructure services are not. This is exactly what we are looking to achieve: we want to test the queries that the repository provides without paying the price of the slow (and hard to maintain) tests it entails once we hook it up to the DB.

We define two interfaces to implement – following the same pattern as LINQ to SQL – one for a collection of elements, the other for the unit of work that contains them. The advantage of separating the two is that we can submit changes for multiple collections at the same time.

public interface IUnitofWork
{
    ITable<Customer> Customers { get; set; }

    void SubmitChanges();
}

public interface ITable<T> : IQueryable<T>, IEnumerable<T>
{
    void Add(T item);
    void Attach(T item);
    void Remove(T item);
    void RemoveAll(IEnumerable<T> items);
}
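For the run-time side, the production implementations wrap DataContext and Table<T>. The original does not show them, so this is a sketch under the assumption that the CTP’s Table<T> exposes Add, Attach, Remove and RemoveAll; the class names here are mine:

```csharp
// Hedged sketch: production ITable<T> and IUnitofWork implementations
// that defer to LINQ to SQL. DbTable<T> is an assumed name.
public class DbTable<T> : ITable<T>
{
    private Table<T> impl;

    public DbTable(Table<T> source) { impl = source; }

    public void Add(T item) { impl.Add(item); }
    public void Attach(T item) { impl.Attach(item); }
    public void Remove(T item) { impl.Remove(item); }
    public void RemoveAll(IEnumerable<T> items) { impl.RemoveAll(items); }
    // ...IQueryable<T> and IEnumerable<T> members defer to impl in the same way
}

public class UnitOfWork : IUnitofWork
{
    private DataContext context;
    private ITable<Customer> customers;

    public UnitOfWork(DataContext context)
    {
        this.context = context;
        customers = new DbTable<Customer>(context.GetTable<Customer>());
    }

    public ITable<Customer> Customers
    {
        get { return customers; }
        set { customers = value; }
    }

    public void SubmitChanges() { context.SubmitChanges(); }
}
```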

It is fairly simple to implement these for an in-memory collection, and if we use generics we really only need to write this wrapper once. I have shortened the full implementation, but the pattern is the same throughout: defer to the underlying implementation. The only trick is the use of ToQueryable() to provide the implementation of the IQueryable<T> interface.

public class FakeTable<T> : ITable<T>
{
    private List<T> impl;
    private IQueryable<T> queryableImpl;

    public FakeTable(List<T> source)
    {
        impl = source;
        queryableImpl = impl.ToQueryable();
    }

    public void Add(T item)
    {
        impl.Add(item);
    }

    public IQueryable<S> CreateQuery<S>(System.Expressions.Expression expression)
    {
        return queryableImpl.CreateQuery<S>(expression);
    }

    IEnumerator<T> IEnumerable<T>.GetEnumerator()
    {
        return impl.GetEnumerator();
    }

    // ...the remaining ITable<T> and IQueryable<T> members defer to
    // impl and queryableImpl in the same way
}
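The FakeUnitOfWork used in the tests below is the in-memory counterpart. A minimal sketch (the exact shape is an assumption, as the original does not show it):

```csharp
// Hedged sketch: an in-memory unit of work pairing with FakeTable<T>.
public class FakeUnitOfWork : IUnitofWork
{
    private ITable<Customer> customers = new FakeTable<Customer>(new List<Customer>());

    public ITable<Customer> Customers
    {
        get { return customers; }
        set { customers = value; }
    }

    // In-memory changes are visible immediately, so there is nothing to submit.
    public void SubmitChanges() { }
}
```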

Our unit tests then look like this. Again, note how we are checking our LINQ expressions, but through LINQ to Objects:

[TestMethod]
public void FindCustomer()
{
    CustomerRepository customerRepository = new CustomerRepository(new FakeUnitOfWork());

    InitializeTestData(customerRepository);

    Customer customer = customerRepository.FindCustomer("ALFKI");

    Assert.IsNotNull(customer);
}

[TestMethod]
public void AddCustomer()
{
    FakeUnitOfWork unitOfWork = new FakeUnitOfWork();
    CustomerRepository customerRepository = new CustomerRepository(unitOfWork);

    Customer customer = new Customer();
    string customerID = "XXXXX";
    customer.CustomerID = customerID;
    customer.CompanyName = "AnyCompany";
    customerRepository.Customers.Add(customer);

    unitOfWork.SubmitChanges();
    Customer foundCustomer =
        (from c in customerRepository.Customers
         where c.CustomerID == customerID
         select c).Single<Customer>();

    Assert.AreSame(customer, foundCustomer);
}

private void InitializeTestData(CustomerRepository northwindRepository)
{
    northwindRepository.Customers = new FakeTable<Customer>(new List<Customer>
    {
        new Customer() {CustomerID = "ALFKI", CompanyName = "Alfreds Futterkiste", ContactTitle = "Sales Representative",
            Address="Obere Str. 57", City="Berlin", PostalCode = "12209", Country="Germany", Phone="030-0074321", Fax="030-0076545"}
    });
}

The reality is that the AddCustomer test is not that useful, as it does not really test anything (and the call to SubmitChanges is a do-nothing operation). Its value here is just that I’ll show you the same test again as an integration test, so you can see that we can swap between LINQ to Objects and LINQ to SQL. The FindCustomer test does have value, however, because we can exercise our LINQ expression against objects, which is cheap, instead of against SQL.

The integration tests then look like:

[TestMethod]
public void FindCustomer()
{
    DataContext context = new DataContext(ConfigurationManager.ConnectionStrings["NorthWind"].ConnectionString, Mapping.GetMapping());
    CustomerRepository customerRepository = new CustomerRepository(new UnitOfWork(context));

    Customer customer = customerRepository.FindCustomer("ALFKI");

    Assert.IsNotNull(customer);
}

[TestMethod]
public void AddCustomer()
{
    using (new TransactionScope())
    {
        DataContext context = new DataContext(ConfigurationManager.ConnectionStrings["NorthWind"].ConnectionString, Mapping.GetMapping());
        UnitOfWork unitOfWork = new UnitOfWork(context);
        CustomerRepository customerRepository = new CustomerRepository(unitOfWork);

        Customer customer = new Customer();
        string customerID = "XXXXX";
        customer.CustomerID = customerID;
        customer.CompanyName = "AnyCompany";
        customerRepository.Customers.Add(customer);

        unitOfWork.SubmitChanges();
        Customer foundCustomer =
            (from c in customerRepository.Customers
             where c.CustomerID == customerID
             select c).Single<Customer>();

        Assert.AreSame(customer, foundCustomer);
    }
}

These tests exercise the same functionality, but do so against LINQ to SQL, so at this point we can test that our mappings are correct and that we persist correctly to the DB.

As an aside, I’m using TransactionScope here to ensure that the writes to the DB are only transient, so that we can repeat the test and isolate changes from other tests. The transaction automatically rolls back at the end of the block, because we never call Complete. This is a variation of Roy Osherove’s COM+ transaction approach, but it leverages the functionality of TransactionScope, so we should not need promotion to an OleTx transaction, with the additional overhead that requires (and the consequent debugging issues). If you are working with .NET 2.0 I would recommend the TransactionScope approach over the use of COM+ transactions.

Conclusion

LINQ to SQL is usable with a TDD/DDD approach to development. Indeed, the ability to swap between LINQ to Objects and LINQ to SQL promises to make much more of the code easily testable via unit tests than before.

What’s next?

I have not pushed this technique very far, so it is possible that the ability to swap between LINQ to Objects and LINQ to SQL hits limits; hopefully further research will highlight them. In addition I would like to look at the relationship between Evans’ Specification pattern and IQueryable<T>.
