ORDER BY ... DESC/ASC LIMIT M is a pattern I run into all the time when optimizing MySQL SQL. The principle behind the optimization is simple: exploit the fact that an index is ordered. The optimizer scans along the index in order and stops as soon as it has read the M rows that satisfy the conditions. It sounds trivial, yet I keep seeing badly performing SQL that never takes advantage of this rule. The following real cases show how it plays out in practice.
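Before the cases, a minimal sketch of the pattern itself may help. The table orders_demo and index idx_gmt_create below are invented purely for illustration and are not part of the cases that follow:

-- Hypothetical table: the secondary index keeps rows ordered by gmt_create.
CREATE TABLE orders_demo (
  id         BIGINT   NOT NULL AUTO_INCREMENT PRIMARY KEY,
  gmt_create DATETIME NOT NULL,
  status     TINYINT  NOT NULL,
  KEY idx_gmt_create (gmt_create)
) ENGINE=InnoDB;

-- The optimizer can walk idx_gmt_create from the largest value backwards and
-- stop after reading offset + limit rows (here 0 + 10), so no filesort is needed.
SELECT * FROM orders_demo ORDER BY gmt_create DESC LIMIT 10;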
Case 1:
A SQL statement was running very slowly. Its execution time:
root@test 02:00:44> SELECT * FROM test_order_desc WHERE end_time > now() ORDER BY gmt_create DESC, count_num DESC LIMIT 12, 12;
+---------+-----------+------------+------+---------------------+---------------------+---------------------
Data1 ...
Data2 ...
+---------+-----------+------------+------+---------------------+---------------------+---------------------
12 rows in set (0.49 sec)
Its execution plan was as follows:
root@test_db 01:53:23> EXPLAIN SELECT * FROM test_order_desc WHERE end_time > now() ORDER BY gmt_create DESC, count_num DESC LIMIT 12, 12;
+----+-------------+-----------------+-------+-----------------+-----------------+---------+------+--------+-----------------------------+
| id | select_type | table           | type  | possible_keys   | key             | key_len | ref  | rows   | Extra                       |
+----+-------------+-----------------+-------+-----------------+-----------------+---------+------+--------+-----------------------------+
|  1 | SIMPLE      | test_order_desc | range | ind_hot_endtime | ind_hot_endtime | 9       | NULL | 113549 | Using where; Using filesort |
+----+-------------+-----------------+-------+-----------------+-----------------+---------+------+--------+-----------------------------+
The ind_hot_endtime index is defined on:
root@test_db 01:52:45> SHOW INDEX FROM test_order_desc;
ind_hot_endtime (end_time, count_num)
Note that 113,549 rows satisfy the filter end_time > now(), and the query also contains an ORDER BY, so the result set that has to be sorted is very large and the execution is very expensive. Since the SQL ends with an ORDER BY ... DESC LIMIT, we can add an index that matches the sort order; and because LIMIT caps the result set, the scan can stop as soon as enough qualifying rows have been found. Let's look at the effect of the optimization.
Add the index:
root@test 02:01:06> ALTER TABLE test_order_desc ADD INDEX ind_gmt_create (gmt_create, count_num);
Query OK, 211945 rows affected (6.71 sec)
Records: 211945  Duplicates: 0  Warnings: 0
Run the SQL again and check its execution time:
root@test 02:01:35> SELECT * FROM test_order_desc WHERE end_time > now() ORDER BY gmt_create DESC, count_num DESC LIMIT 12, 12;
+---------+-----------+------------+------+---------------------+---------------------+
col2 ...
+---------+-----------+------------+------+---------------------+---------------------+
Data1 ...
Data2 ...
+---------+-----------+------------+------+---------------------+---------------------+
12 rows in set (0.00 sec)
The execution time has dropped to effectively zero (0.00 sec as reported by the client). Look at the new execution plan:
root@test 02:01:42> EXPLAIN SELECT * FROM test_order_desc WHERE end_time > now() ORDER BY gmt_create DESC, count_num DESC LIMIT 12, 12;
+----+-------------+-----------------+-------+-----------------+----------------+---------+------+------+-------------+
| id | select_type | table           | type  | possible_keys   | key            | key_len | ref  | rows | Extra       |
+----+-------------+-----------------+-------+-----------------+----------------+---------+------+------+-------------+
|  1 | SIMPLE      | test_order_desc | index | ind_hot_endtime | ind_gmt_create | 14      | NULL | 48   | Using where |
+----+-------------+-----------------+-------+-----------------+----------------+---------+------+------+-------------+
The optimizer now chooses an index scan on ind_gmt_create, which avoids sorting the result set altogether, and it estimates that scanning only 48 rows will be enough to find the rows that satisfy the filter (end_time > now()). This is an ideal plan.
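For anyone who wants to reproduce case 1, here is a rough sketch of the relevant part of the schema. The column types are guesses (a nullable DATETIME plus a nullable INT happens to match the key_len values of 9 and 14 seen above on a pre-5.6 server), only the columns used by the query and the two indexes are shown, and the real table certainly has more columns:

-- Assumed minimal shape of test_order_desc; not the real DDL.
CREATE TABLE test_order_desc (
  id         BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  end_time   DATETIME,   -- filtered with end_time > now()
  gmt_create DATETIME,   -- first ORDER BY column
  count_num  INT,        -- second ORDER BY column
  KEY ind_hot_endtime (end_time, count_num),
  KEY ind_gmt_create  (gmt_create, count_num)  -- the index added during the optimization
) ENGINE=InnoDB;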
Case 2:
root@127.0.0.1 : test_db 16:05:15> EXPLAIN SELECT b.*, a.*, k.*
    FROM instance b
    LEFT OUTER JOIN image a ON b.image_id = a.image_id
    LEFT OUTER JOIN key_pair k ON b.key_pair_id = k.key_pair_id
    LEFT OUTER JOIN region_alias r_a ON r_a.region_no = b.region_no
    WHERE b.status IN (1, 8) AND b.user_id = 21 AND r_a.big_region_no = 'regeion_xx'
    ORDER BY b.instance_no ASC LIMIT 37300, 50;
Table b (instance) has an index idx_uid_stat_inid covering the columns (user_id, status, instance_no).
Looking at the execution plan, the join order is b -> r_a -> a -> k. The first row of the plan shows about 49,212 rows to be scanned, and because status is filtered with IN, instance_no cannot be used for ordering even though it is part of the index, so the sort falls back to a temporary table; this is what makes the SQL slow. The query ends with ORDER BY b.instance_no ASC LIMIT 37300,50, and that is where the opportunity lies: adjust the indexes so that the sort on table b is satisfied by an index.
root@127.0.0.1 : test_db 16:05:04> ALTER TABLE instance ADD INDEX ind_user_id (user_id, instance_no);
Query OK, 0 rows affected (0.56 sec)
Check the execution plan after adjusting the index:
root@127.0.0.1 : test_db 16:09:42> EXPLAIN SELECT b.*, a.*, k.*
    FROM instance b
    LEFT OUTER JOIN image a ON b.image_id = a.image_id
    LEFT OUTER JOIN key_pair k ON b.key_pair_id = k.key_pair_id
    LEFT OUTER JOIN region_alias r_a ON r_a.region_no = b.region_no
    WHERE b.status IN (1, 8) AND b.user_id = 21 AND r_a.big_region_no = 'regeion_xx'
    ORDER BY b.instance_no ASC LIMIT 37300, 50;
The optimizer still does not pick the new index on its own, so we add FORCE INDEX to make it use the index we just created:
root@127.0.0.1 : test_db 16:10:24> EXPLAIN SELECT b.*, a.*, k.*
    FROM instance b FORCE INDEX (ind_user_id)
    LEFT OUTER JOIN image a ON b.image_id = a.image_id
    LEFT OUTER JOIN key_pair k ON b.key_pair_id = k.key_pair_id
    LEFT OUTER JOIN region_alias r_a ON r_a.region_no = b.region_no
    WHERE b.status IN (1, 8) AND b.user_id = 21 AND r_a.big_region_no = 'regeion_xx'
    ORDER BY b.instance_no ASC LIMIT 37300, 50;
With the hint in place, the new index is used and roughly 54,580 rows are scanned. Execution time:
root@127.0.0.1 : test_db 16:10:30> SELECT b.*, a.*, k.*
    FROM instance b FORCE INDEX (ind_user_id)
    LEFT OUTER JOIN image a ON b.image_id = a.image_id
    LEFT OUTER JOIN key_pair k ON b.key_pair_id = k.key_pair_id
    LEFT OUTER JOIN region_alias r_a ON r_a.region_no = b.region_no
    WHERE b.status IN (1, 8) AND b.user_id = 21 AND r_a.big_region_no = 'regeion_xx'
    ORDER BY b.instance_no ASC LIMIT 37300, 50;
(0.49 sec)
For comparison, the original execution time:
root@127.0.0.1 : test_db 16:10:51> SELECT b.*, a.*, k.*
    FROM instance b
    LEFT OUTER JOIN image a ON b.image_id = a.image_id
    LEFT OUTER JOIN key_pair k ON b.key_pair_id = k.key_pair_id
    LEFT OUTER JOIN region_alias r_a ON r_a.region_no = b.region_no
    WHERE b.status IN (1, 8) AND b.user_id = 21 AND r_a.big_region_no = 'regeion_xx'
    ORDER BY b.instance_no ASC LIMIT 37300, 50;
(1.28 sec)
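If keeping FORCE INDEX in application SQL is undesirable, one thing worth trying (a suggestion, not something verified against this case) is to rebuild the table's index statistics so the optimizer gets a fresh view of ind_user_id, and then re-check the plan without the hint:

-- Refresh index statistics for the instance table; afterwards the optimizer may
-- pick ind_user_id on its own. FORCE INDEX remains the fallback if it still does not.
ANALYZE TABLE instance;
-- Re-run the earlier EXPLAIN (without FORCE INDEX) and check the key column.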
Summary:
The ORDER BY DESC/ASC LIMIT technique can produce surprisingly good results when you cannot build an ideal index, but it has its limits: the optimizer may not scan along the index path you have in mind, because it weighs the selectivity of the filter columns against the length of the LIMIT. When the filter columns are highly selective, the result set to sort is small and sorting is cheap; when the filter columns have low selectivity, reading in index order and stopping at the LIMIT is the effective approach.
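When it is not obvious which side of this trade-off a query falls on, the optimizer trace (available from MySQL 5.6 onward) shows the costs the optimizer compared before settling on a plan. A minimal sketch, using the query from case 1 as the example:

-- Enable the trace for this session, run the query, then read the trace.
SET optimizer_trace = 'enabled=on';
SELECT * FROM test_order_desc
WHERE end_time > now()
ORDER BY gmt_create DESC, count_num DESC
LIMIT 12, 12;
-- The trace JSON includes the filesort vs. index-order decision and its costs.
SELECT trace FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';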