Saturday, November 16, 2019
Determining Attributes to Maximize Visibility of Objects
A Critique on "Determining Attributes to Maximize Visibility of Objects" by Muhammed Miah, Gautam Das, Vagelis Hristidis, and Heikki Mannila

Vidisha H. Shah

1. Summary of the published work

Miah, Das, Hristidis, and Mannila (2009, p. 959) discuss ranking functions and top-k retrieval algorithms that help a user (a potential buyer) search for a required product in an available catalog. The problem they study is how a user (a potential seller) should select the attributes of a new tuple so that the product stands out from the other available products. The authors develop several formulations of this problem, some building on techniques already in practice. According to the authors, a search is run by entering query keywords on whose basis the search is conducted (p. 959). The query answering system may return all tuples that satisfy the condition, which is called unranked retrieval or Boolean retrieval, or it may rank the answers and return the top k, known as ranked retrieval or top-k retrieval. For example, objects can be ranked on an attribute such as price, or by relevance.

The authors illustrate the problem with an example: a user wants to place an ad to rent an apartment in an online newspaper (p. 959). The ad (tuple) has various attributes, such as the number of bedrooms, the location, and so on. Since every ad has a cost, the attributes that provide the best visibility should be selected. To decide which attributes provide better visibility, one can rely on previous sellers' recommendations (the traditional technique) or analyze the ranking function to understand which attributes lead to a high ranking score. For example, adding an attribute such as "swimming pool" can increase visibility, as can a catchy title or good indexing keywords (for an article).
Let D be a database of products that have already been advertised (the competition). The authors consider that the database can be a relational table or a collection of text documents (p. 960). If the database is a relational table, then each tuple in the table is a product and every column is an attribute of the product. If the database is a collection of text documents, then each document contains the data of a specific product (ad). The set of queries or search conditions executed in the past by users is denoted Q; Q is therefore the "query log" or "workload", a record of the queries that potential buyers have issued in the past. A query could be an SQL query or a keyword-based query that returns tuples from D.

The problem stated by the authors is: given a database D, a query log Q, a new tuple t, and an integer m, determine the best m attributes of t such that, when the shortened version of t with those m attributes is inserted into D, the number of queries in Q that retrieve t is maximized (p. 960). The paper also considers variants in which m is given by the user and in which m is not specified. The authors consider several further variants, such as Boolean (unranked) retrieval (p. 960), categorical data, text and numeric data, and conjunctive versus disjunctive query semantics. A budget variant is considered in which, if m is not given, the objective of maximizing visibility is achieved while keeping m as small as possible. A no-budget variant is also considered, in which m is not given and the only aim is to gain maximum visibility of the object; for that, all possible attributes can be added. In the preliminaries section, the authors describe the database D as containing tuples {t1, t2, ..., tm}, where each tuple t has attributes {a1, a2, ..., an}, and each attribute of a tuple has value either 1 or 0.
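The Boolean model described above can be sketched in a few lines of code. This is a minimal illustration (the attribute names and queries are invented for the example, not taken from the paper): a tuple is represented as the set of attributes on which it has value 1, and a conjunctive query retrieves a tuple exactly when the tuple has a 1 for every queried attribute.

```python
# Minimal sketch of the Boolean model: a tuple is the set of attributes
# on which it has value 1; a conjunctive Boolean query is the set of
# attributes it requires.

def satisfies(tuple_attrs, query_attrs):
    """A tuple satisfies a conjunctive Boolean query iff it contains
    every attribute the query asks for (a subset test)."""
    return query_attrs <= tuple_attrs

def visibility(tuple_attrs, query_log):
    """Visibility of a tuple: how many queries in the log Q retrieve it."""
    return sum(satisfies(tuple_attrs, q) for q in query_log)

Q = [{"pool", "garage"}, {"garage"}, {"pool", "gym"}]  # invented query log
t = {"pool", "garage", "balcony"}                      # invented new tuple
print(visibility(t, Q))  # the first two queries retrieve t, the third does not
```

With this representation, "maximizing visibility" is simply maximizing the count returned by `visibility` over the choice of which attributes of t to keep.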
A value of 0 means the attribute is absent and 1 means the feature is available. Tuple domination means that one tuple dominates another if it has value 1 in every attribute where the other does. The compression of a tuple t to m attributes retains the 1s of the m chosen attributes and sets all other attributes to 0 (p. 961). In the Conjunctive Boolean with Query Log (CB-QL) variant, the problem definition stated by the authors is: given a query log Q with conjunctive Boolean retrieval semantics, a tuple t, and an integer m, compute the compressed tuple with m attributes that has maximum visibility (p. 961). For this problem the authors prove NP-completeness results, deriving the theorem that the decision version of the CB-QL problem is NP-hard.

The authors then explain various algorithms for Conjunctive Boolean with Query Log (p. 961). The first is an optimal brute-force algorithm. Since CB-QL is NP-hard, no optimal algorithm can be expected to run in polynomial time in the worst case. The problem can nevertheless be solved by a simple brute-force algorithm, BruteForce-CB-QL, which considers all combinations of m attributes of the tuple t and selects the combination that achieves maximum visibility among the queries in Q. In the optimal algorithm based on integer linear programming, CB-QL is described in an ILP framework as follows: the new tuple t is a Boolean vector over the attributes {a1, a2, ..., an}, Q is the query log, and S is the total number of queries in the query log; the task is to maximize the number of satisfied queries subject to selecting at most m attributes. This integer linear formulation is attractive because ILP solvers are usually more efficient than general IP solvers (p. 962). The optimal algorithm based on maximal frequent item sets is motivated by a limitation of the ILP approach: according to the authors, ILP becomes impractical if there are more than a few hundred queries in the query log Q.
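The brute-force idea for CB-QL described above can be sketched as follows (a simplified reading with invented example data, not the authors' exact code): enumerate every m-subset of the new tuple's attributes, compress the tuple to that subset, and keep the subset satisfying the most queries in the log. The enumeration is exponential in m, which is consistent with the problem's NP-hardness.

```python
# Sketch of a brute-force search for the best m-attribute compression.
from itertools import combinations

def compress(tuple_attrs, keep):
    """Tuple compression: retain the 1s of the chosen attributes only."""
    return tuple_attrs & keep

def visibility(tuple_attrs, query_log):
    """Number of conjunctive queries in the log that retrieve the tuple."""
    return sum(q <= tuple_attrs for q in query_log)

def brute_force_cbql(tuple_attrs, query_log, m):
    """Enumerate all m-subsets of the tuple's attributes and return the
    subset (and query count) with maximum visibility."""
    best, best_hits = None, -1
    for subset in combinations(sorted(tuple_attrs), m):
        hits = visibility(compress(tuple_attrs, set(subset)), query_log)
        if hits > best_hits:
            best, best_hits = set(subset), hits
    return best, best_hits

Q = [{"pool"}, {"pool", "garage"}, {"gym"}, {"garage"}]  # invented query log
t = {"pool", "garage", "gym"}
print(brute_force_cbql(t, Q, 2))  # keeping pool and garage satisfies 3 of the 4 queries
```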
The authors develop an alternative approach that scales well to large query logs (p. 963). This algorithm is called MaxFreqItemSets-CB-QL; for it the authors define the frequent item set problem, complementing the query log, the setting of the threshold parameter, a random walk to compute maximal frequent item sets, the complexity analysis of a random walk sequence, the number of iterations, frequent item sets at level M _ m, preprocessing opportunities, and the per-attribute variant. The authors note that even the maximal frequent item set algorithm, although it has better scalability than the ILP-based algorithm, becomes slow for very large query logs (p. 964). They therefore develop suboptimal greedy heuristics for solving CB-QL. The algorithm ConsumeAttr-CB-QL computes the number of times each attribute appears in Q and selects the top m attributes with the highest frequency. The algorithm ConsumeAttrCumul-CB-QL first selects the attribute that occurs most often in the query log Q, then finds the attribute that occurs second most often, and so on. The algorithm ConsumeQueries-CB-QL picks the query with the minimum number of attributes first, and then selects all attributes specified in that query.

In the next section the authors explain the problem variant for text data. A text database is a collection of documents, each containing the data of a particular ad (p. 965). The problem definition for text data is: given a query that is a set of keywords, retrieve the top-k documents via query-specific scoring functions and make the document maximally visible. According to the authors, a text database can be mapped directly to a Boolean database by treating each keyword as a Boolean attribute (p. 965), so the algorithms can be made similar to those for Boolean data; however, attribute selection for text data is also NP-complete.
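The ConsumeAttr heuristic described above can be sketched as follows. This is a simplified reading in the spirit of ConsumeAttr-CB-QL, not the authors' exact procedure, and the example query log is invented: count how often each attribute appears across the query log and keep the m most frequent ones.

```python
# Sketch of a frequency-counting greedy heuristic for attribute selection.
from collections import Counter

def consume_attr(query_log, m):
    """Rank attributes by how often they appear across the query log
    and return the m most frequent ones."""
    counts = Counter(a for q in query_log for a in q)
    return {a for a, _ in counts.most_common(m)}

Q = [{"pool", "garage"}, {"garage"}, {"pool", "gym"}, {"garage", "gym"}]
print(consume_attr(Q, 2))  # "garage" appears 3 times, "pool" and "gym" twice each
```

Unlike the brute-force search, this runs in time linear in the size of the query log, which is why such heuristics remain usable when the optimal algorithms do not scale.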
Since a text database can be converted to a Boolean database, the algorithms for text data face two issues: first, when each text keyword is viewed as a Boolean attribute in the query log Q, none of the optimal algorithms is feasible for text data (p. 965); second, the scoring functions used for text data take the document length into account and decrease the score when a keyword has low frequency.

In the next section the authors describe the experiments they conducted and their results. The system used for the experiments had the following configuration: P4 3.2-GHz processor, 1 GB RAM, 100 GB HDD, and Microsoft SQL Server 2000 as the RDBMS. The algorithms were implemented in C#, and connectivity to the backend RDBMS was done using ADO. Two data sets were used for the Boolean data experiments, and publication titles were used for the text data experiments. A query log of 185 queries with 205 distinct keywords was created by other students for the experiments. The experiments worked well for Boolean data: CB-QL returned the top m attributes with maximum visibility over the 185 queries. Individual experiments were done to measure the execution time of each CB-QL algorithm, and the authors give statistics on how each algorithm behaves under various workloads. Similar experiments and statistics are reported for the text data algorithms (p. 965).

In the next section, various other problem variants for Boolean, categorical, and numeric data are considered. The authors first explain Conjunctive Boolean-Data (CB-D), giving its problem definition for maximum visibility in terms of D (database), Q (query log), t (new tuple), and m (integer), followed by complexity results for CB-D and its algorithms (p. 967).
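The reduction of text data to Boolean data mentioned above can be sketched as follows (illustrative only; the paper's scoring-function subtleties, such as penalizing document length, are deliberately not modeled here): each keyword in a fixed vocabulary becomes a Boolean attribute that is 1 exactly when the keyword occurs in the document.

```python
# Sketch of mapping a text document to a Boolean tuple over a keyword
# vocabulary: the attribute for a keyword is "1" iff the keyword occurs.
import re

def doc_to_boolean(text, vocabulary):
    """Return the set of vocabulary keywords present in the document,
    i.e., the Boolean tuple written as its set of 1-attributes."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {kw for kw in vocabulary if kw in words}

vocab = {"apartment", "pool", "garage", "downtown"}  # invented vocabulary
ad = "Sunny downtown apartment with a large pool"    # invented document
print(doc_to_boolean(ad, vocab))
```

After this mapping, the Boolean-data machinery applies directly, which is exactly why the hardness of attribute selection carries over to keyword selection for text.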
The next variants considered are Top-k Global Ranking (Tk-GR) and Top-k Query-Specific Ranking (Tk-QR), in which the authors consider top-k retrieval using global and query-specific scoring functions. The problem definitions for Tk-GR and Tk-QR are stated, followed by their complexity results and algorithms (p. 968). The next variant is Skyline Boolean (SB), which uses skyline retrieval semantics; its problem definition, complexity, and algorithms are discussed. In the same way, the remaining variants, Conjunctive Boolean Query Log with Negation (CB-QL-Negation), Maximize Query Coverage (MQC), and categorical and numeric data, are discussed (p. 969). In conclusion, the authors describe how the best attributes can be selected from the data set given the query log. They present variants for many cases, such as Boolean, categorical, text, and numeric data (p. 972), and show that even though the problem is NP-complete, the optimal algorithms are feasible for small inputs. They also present greedy algorithms that produce good approximation ratios.

2. My Opinion on the published work

The use of the internet has increased tremendously, and with it the data available on the network, but the main problem is converting information to knowledge: finding data that is useful to the user rather than spam. The algorithms discussed by the authors can be used to improve the visibility of a document. The authors give algorithms not just for Boolean data but also for text data and other variants, so the algorithms can be applied to real-world data in its various forms.
The main focus of the authors is on the potential seller and which attributes should be added to maximize the visibility of an advertisement or document on the web, so that potential buyers see that document among the first few options. However, the technique can also be used the other way round to create spam: a fake document with various attributes that are not true but are added to gain maximum visibility, and that should not even be displayed in the given category. The authors make assumptions about the competitors (the other advertisers) and about users' preferences. The queries in the query log were written by random students rather than derived from what actual users want, so there is no guarantee that the approach will work equally well in a real environment and will actually maximize visibility with real users on a real network. Every problem definition of every variant assumes that a database D and a query log Q are given, but in practice, for many applications, neither D nor Q is available for analysis, so the user has to make assumptions about the competitors and the needs of potential buyers, and then decide the top-k attributes from the subset of all attributes that will achieve maximum visibility with the minimum number of attributes. The authors give various variants by which the visibility of an object can be maximized, along with optimal algorithms and greedy algorithms. The optimal algorithms give optimal outputs but work well only for small inputs; as the input size increases, they do not scale. The greedy algorithms produce approximate results, as can be seen from the authors' experiments with the various variants.

According to Ao-Jan Su, Y. Charlie Hu, Aleksandar Kuzmanovic, and Cheng-Kok Koh, the page rank of a document or advertisement depends not only on its attributes but also on keywords in the host name, keywords in the URL, and the HTML header, so along with selecting proper attributes the user also needs to attend to these factors to maximize the visibility of the object (2010, p. 55). Angelica Caro gives a table of data quality (DQ) and visibility rankings for Spanish university portals, listing a DQ ranking, a visibility ranking, partial visibility rankings in terms of site, links, and popularity, and the distance between the data quality and visibility rankings; teal numbers in the table indicate portals that are relatively close in both rankings. The results show that there is no precise correspondence: a site can be ranked 1 in data quality yet 19 in visibility, because visibility is also based on other factors such as popularity, links, and site structure. Even if the DQ of a site is not very good, being popular or having many incoming links can improve the overall ranking of the page and thereby maximize its visibility. The site that ranks first in visibility has data quality rank 5, visibility 1, site 1, links 1, popularity 3, and distance 4, so it can be seen that maximum visibility does not depend only on the attributes of the data (that is, data quality); various other factors also need to be considered to improve the visibility of an object, and these are not considered in the paper (2011, p. 46).

References

Ao-Jan Su, Hu, Y. C., Kuzmanovic, A., & Cheng-Kok Koh (2010). How to Improve Your Google Ranking: Myths and Reality. 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), 1, 50-57. doi:10.1109/WI-IAT.2010.195

Caro, A., Calero, C., & Moraga, M. A. (2011). Are Web Visibility and Data Quality Related Concepts? IEEE Internet Computing, 15(2), 43-49. doi:10.1109/MIC.2010.126

Miah, M., Das, G., Hristidis, V., & Mannila, H. (2009). Determining Attributes to Maximize Visibility of Objects. IEEE Transactions on Knowledge and Data Engineering, 21(7), 959-973. doi:10.1109/TKDE.2009.72