Search engine optimization (SEO) is a collection of techniques that help your site get more traffic from search engines like Google and Microsoft Bing. SEO can be divided into two main categories:
- On-Page SEO (changes made on your own site)
- Off-Page SEO (work that takes place outside your site)
In this tutorial we will discuss these methods briefly.
On-Page SEO –
On-page SEO is the work done on your own website that helps it get more traffic from search engines. It covers the following areas:
- Title Tag – The most important part of your web page is the title tag (<title></title>). Search engines crawl websites day and night to gather information and categorize it, so users can easily find what they are looking for. Make your title as clear and descriptive of the page's topic as possible, so search engines can clearly distinguish your pages from others.
- Header & Bold Tags – In web development, HTML developers often rely on CSS-styled generic elements to format the topics of each page. This is bad for SEO because when search engines crawl websites they largely ignore presentational tags like <span>. Instead, use the header tags <h1> to <h6> to mark up the topics of each page, and put important terms in bold to draw the user's attention.
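For example, a topic heading marked up with <h1> carries structural meaning for a crawler, while the same text styled only through a <span> does not (the class name below is made up for illustration):

```html
<!-- Bad: the crawler sees no structural meaning in this "heading" -->
<span class="big-title">PHP Framework Tutorials</span>

<!-- Good: <h1> tells the search engine this is the page's main topic -->
<h1>PHP Framework Tutorials</h1>
<h2>Getting Started with <strong>Yii</strong></h2>
```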
- Keywords – While doing SEO for your site you must know which keywords or terms you want to target. SEO keywords fall into three categories:
- Specific keywords, e.g. "black color shoes"
- Broad keywords, e.g. "shoes", "sports"
- Unique keywords, i.e. terms tied to the unique identity of your site
Off-Page SEO –
Off-page SEO is the work done outside your own site. It covers the following areas:
- Anchor Text – Simply getting a link from another website, or putting your site's link on other websites, is not enough to rank well in search engines; what matters is the quality of your anchor text.
- Targeting Competitors – When you search for your competitor's website, you will find the most authoritative sites that mention your competitor. Visit these sites and try to get a link from them. If it is a review site, it may be as easy as sending a free product for review. If it is a forum or wiki, it may be as simple as adding your link.
Don't do this all at once. Make a note of all the sites you would like a link from, and acquire links from them slowly; there is no set time period.
Cloud computing has been a hot topic during the last few years for technology specialists all over the world. It has been adopted by many enterprises, but challenges remain, and with all the articles and documentation on the subject, many myths have developed over time. Here are the most common myths regarding cloud computing: security, data loss and performance.
Security is compromised in the cloud
Without a doubt, this is the most talked about point. In order to be a successful service provider, cloud providers have to assure customers and prospects that their data is secure. The security risks that exist in the cloud are no different from the ones that exist in-house. The greatest advantage of outsourcing to the cloud is that providers are permanently focused on improving security controls and procedures, while enterprises might neglect this focus from time to time. So one could argue that remaining in a physical environment is itself a risk. Most cloud computing providers also offer customers different levels of security protection, allowing for enhanced security where it is needed.
You lose control of data in the cloud
This is another common myth. Most people think that they will not be able to access their data whenever they need it because they cannot see the actual physical drives the data is stored on. With the cloud, the technology maintenance and support issues are in the hands of the cloud hosting provider, which means a high level of availability and access to your data. Data in cloud environments is segmented and encrypted, and some providers also let you control how your data is stored, allowing it to sit on a shared storage system or on dedicated storage. I have worked with these types of cloud systems and think that this flexibility is the future of cloud management.
Performance is a problem in the cloud
It is easier to add resources in a cloud environment, and if deployed correctly those resources can be balanced to give you a higher level of performance and redundancy. The latest servers built for the cloud, like the Cisco UCS that I'm familiar with, run on very high-performance blades that most companies do not deploy in a physical environment, which has allowed us to achieve much better performance than the same systems running physically. Some refactoring of your databases and applications may be needed to take advantage of the cloud and realize these benefits.
Hello everyone. In this tutorial we will look at setting up the Yii framework and creating our skeleton app step by step. Let's begin.
Firstly, we need to download the Yii framework. This can be done from the Yii Framework website. When the download is complete, unzip the file and you will see a folder containing the framework itself, plus some legal mumbo jumbo & other stuff.
The only folder we are interested in is the framework folder. Copy this folder and go to your localhost document root. Paste the framework folder into a new folder where we are going to set up our application; in this tutorial we are using localhost/yii-demo.
Cool. Now open up your command prompt and navigate to localhost/yii-demo/framework (e.g. c:\wamp\www\yii-demo\framework).
Now type in: yiic webapp c:\wamp\www\myFirstYiiApp. This will ask you: "Create a Web application under 'C:\wamp\www\myFirstYiiApp'? [Yes|No]"
We do want to, so type in "yes". Some of you might get an error at this point saying 'php' is not recognized as a command (great tutorial, right?). That simply means you don't have PHP set up as an environment variable.
To fix this:
“Open the Environment Variables window by going to: Start -> My Computer (right click!) -> Properties -> Advanced tab -> Environment Variables -> click Path in System variables -> Edit”, then append the directory containing php.exe.
That bit is stolen from the Yii wiki page, thanks.
If you are interested in how to write fast MySQL queries, this article is for you.
1. Use persistent connections to the database to avoid connection overhead.
2. Check that all tables have PRIMARY KEYs on columns with high cardinality (many distinct values, so few rows match any given key value). A `gender` column has low cardinality (selectivity); a unique user id column has high cardinality and is a good candidate to become a primary key.
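A minimal sketch of the point above (table and column names are hypothetical):

```sql
-- `id` has high cardinality (every value is distinct), so it makes a
-- good primary key; `gender` has only a few distinct values, so an
-- index on it alone would buy very little.
CREATE TABLE users (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name   VARCHAR(60)    NOT NULL,
    gender ENUM('m', 'f') NOT NULL,
    PRIMARY KEY (id)
);
```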
3. All references between different tables should usually be done via indices (which also means the columns must have identical data types, so that joins on the corresponding columns are faster). Also check that fields you often search on (those that appear frequently in WHERE, ORDER BY or GROUP BY clauses) have indices, but don't add too many: the worst thing you can do is add an index on every column of a table (I have rarely seen a table that needs more than 5 indices, even one 20-30 columns wide). If you never refer to a column in comparisons, there's no need to index it.
4. Using simpler permissions when you issue GRANT statements enables MySQL to reduce permission-checking overhead when clients execute statements.
5. Use less RAM per row by declaring columns only as large as they need to be to hold the values stored in them.
6. Use leftmost index prefixes: in MySQL you can define an index on several columns, and the left part of that index can be used as a separate index, so you need fewer indices.
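A sketch of the leftmost-prefix rule with a hypothetical composite index:

```sql
-- One composite index serves queries on (last_name, first_name)...
CREATE INDEX idx_name ON users (last_name, first_name);

-- ...and, via the leftmost prefix, queries on last_name alone:
SELECT * FROM users WHERE last_name = 'Smith' AND first_name = 'Anna';
SELECT * FROM users WHERE last_name = 'Smith';

-- But it cannot serve a query on first_name alone:
SELECT * FROM users WHERE first_name = 'Anna';  -- no index used
```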
7. When your index would consist of many columns, consider instead creating a hash column that is short, reasonably unique, and indexed, and querying on that.
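A sketch of the hash-column technique (table and column names are hypothetical): store and index a hash of the wide value, compare the short hash first, then recheck the real column to weed out collisions:

```sql
-- url_crc holds CRC32(url) and is indexed; the integer comparison
-- narrows the search cheaply, the url comparison removes collisions.
SELECT id, url
FROM urls
WHERE url_crc = CRC32('http://example.com/page')
  AND url = 'http://example.com/page';
```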
8. Consider running ANALYZE TABLE (or myisamchk --analyze from the command line) on a table after it has been loaded with data, to help MySQL better optimize queries.
9. Use the CHAR type when possible (instead of VARCHAR, BLOB or TEXT), i.e. when values of a column have constant length: an MD5 hash (32 symbols), an ICAO or IATA airport code (4 and 3 symbols), a BIC bank code, etc. Data in CHAR columns can be found faster than in variable-length columns.
10. Don't split a table just because it has many columns. When accessing a row, the biggest performance hit is the disk seek needed to find the first byte of the row.
11. Declare a column NOT NULL if it really never holds NULLs; this speeds up table traversal a bit.
12. If you usually retrieve rows in the same order, like expr1, expr2, …, run ALTER TABLE … ORDER BY expr1, expr2, … to optimize the table.
13. Don't use a PHP loop to fetch rows from the database one by one just because you can; use a single query with IN instead.
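A minimal PHP sketch of the idea (function and table names are made up): build one IN (...) query with placeholders instead of issuing one SELECT per id:

```php
<?php
// Build a single "... WHERE id IN (?, ?, ?)" query instead of
// running one SELECT per id inside a loop.
function buildInQuery(array $ids): string
{
    $placeholders = implode(', ', array_fill(0, count($ids), '?'));
    return "SELECT id, name FROM users WHERE id IN ($placeholders)";
}

// Usage (with a PDO connection $pdo, not shown):
//   $stmt = $pdo->prepare(buildInQuery($ids));
//   $stmt->execute($ids);
```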
14. Use column default values, and insert only values that differ from the default. This reduces query parsing time.
15. Use INSERT DELAYED or INSERT LOW_PRIORITY (for MyISAM) to write to your change log table. Also, if it’s MyISAM, you can add DELAY_KEY_WRITE=1 option — this makes index updates faster because they are not flushed to disk until the table is closed.
16. Think of storing user session data (or any other non-critical data) in a MEMORY table; it's very fast, though the contents are lost when the server restarts.
17. For your web application, images and other binary assets should normally be stored as files. That is, store only a reference to the file rather than the file itself in the database.
18. If you have to store big amounts of textual data, consider using a BLOB column to hold compressed data (MySQL's COMPRESS() seems to be slow, so gzipping on the PHP side may help) and decompressing the contents on the application server side. In any case, it must be benchmarked.
19. If you often need to calculate COUNT or SUM based on information from a lot of rows (articles rating, poll votes, user registrations count, etc.), it makes sense to create a separate table and update the counter in real time, which is much faster. If you need to collect statistics from huge log tables, take advantage of using a summary table instead of scanning the entire log table every time.
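A sketch of the counter-table approach (schema is hypothetical): keep a running vote count instead of COUNTing the votes table on every page view:

```sql
-- Summary table holding one counter row per poll.
CREATE TABLE poll_vote_counts (
    poll_id INT UNSIGNED NOT NULL PRIMARY KEY,
    votes   INT UNSIGNED NOT NULL DEFAULT 0
);

-- When a vote is recorded, bump the counter in the same transaction:
UPDATE poll_vote_counts SET votes = votes + 1 WHERE poll_id = 42;

-- Reading the total is now a single-row lookup instead of
--   SELECT COUNT(*) FROM votes WHERE poll_id = 42;
SELECT votes FROM poll_vote_counts WHERE poll_id = 42;
```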
20. Don't use REPLACE (which is DELETE+INSERT and wastes ids); use INSERT … ON DUPLICATE KEY UPDATE instead (i.e. INSERT plus UPDATE if a conflict takes place). The same technique can be used when you would otherwise run a SELECT first to find out whether the data is already in the database and then issue either an INSERT or an UPDATE. Why choose yourself? Rely on the database side.
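A sketch of letting the database decide between insert and update (table name is hypothetical):

```sql
-- If no row with this unique key exists, the row is inserted;
-- otherwise only the UPDATE part runs. No prior SELECT, no wasted ids.
INSERT INTO page_hits (page_id, hits)
VALUES (7, 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;
```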
21. Tune MySQL caching: allocate enough memory for the query cache (e.g. SET GLOBAL query_cache_size = 1000000) and set query_cache_min_res_unit depending on your average query result set size.
22. Divide complex queries into several simpler ones; they have a better chance of being cached, and so will be quicker.
23. Group several similar INSERTs into one long INSERT with multiple VALUES lists to insert several rows at a time: the query will be quicker because connecting, sending and parsing a query takes 5-7 times as long as the actual data insertion (depending on row size). If that is not possible, use START TRANSACTION and COMMIT if your tables are InnoDB; otherwise use LOCK TABLES. This benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed; in this case unlock your tables every 1000 rows or so to allow other threads to access the table.
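The two variants sketched side by side (table name is made up):

```sql
-- Slow: three round trips, three parses.
INSERT INTO log (msg) VALUES ('a');
INSERT INTO log (msg) VALUES ('b');
INSERT INTO log (msg) VALUES ('c');

-- Fast: one query with multiple VALUES lists.
INSERT INTO log (msg) VALUES ('a'), ('b'), ('c');
```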
24. When loading a table from a text file, use LOAD DATA INFILE (or my tool for that); it's 20-100 times faster.
25. Log slow queries on your dev/beta environment and investigate them. This way you can catch queries whose execution time is high, queries that don't use indexes, and also slow administrative statements (like OPTIMIZE TABLE and ANALYZE TABLE).
26. Tune your database server parameters: for example, increase buffers size.
27. If your application performs lots of DELETEs, or updates of dynamic-format rows (if a table has VARCHAR, BLOB or TEXT columns, its rows have dynamic format) in your MyISAM tables to a longer total length (which may split the row), schedule an OPTIMIZE TABLE run every weekend via cron. This defragments the tables, which means faster queries. If you don't use replication, add the LOCAL keyword (OPTIMIZE LOCAL TABLE) to make it faster.
28. Don't use ORDER BY RAND() to fetch several random rows. Fetch the last 10-20 entries (by time added or by ID) and pick from them with array_rand() on the PHP side. There are also other solutions.
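A minimal PHP sketch (helper name is made up): fetch a small recent slice with LIMIT, e.g. SELECT * FROM articles ORDER BY id DESC LIMIT 20, then pick rows at random in PHP:

```php
<?php
// Pick $n random rows from an already-fetched result set, instead
// of making MySQL sort the whole table with ORDER BY RAND().
function pickRandomRows(array $rows, int $n): array
{
    // array_rand() returns a single key when $n is 1, hence the cast.
    $keys = (array) array_rand($rows, min($n, count($rows)));
    return array_values(array_intersect_key($rows, array_flip($keys)));
}
```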
29. Consider avoiding the HAVING clause where you can; it's rather slow.
30. In most cases, a DISTINCT clause can be considered a special case of GROUP BY, so the optimizations applicable to GROUP BY queries can also be applied to queries with a DISTINCT clause. Also, if you use DISTINCT, try to use LIMIT (MySQL stops as soon as it finds row_count unique rows) and avoid ORDER BY (it requires a temporary table in many cases).
31. When I read "Building Scalable Web Sites", I found that it is sometimes worth de-normalizing some tables (Flickr does this), i.e. duplicating some data in several tables to avoid expensive JOINs. You can preserve data integrity with foreign keys or triggers.
32. If you want to test a specific MySQL function or expression, use the BENCHMARK() function to do that.