 

Interview Question: Query - which sentences contain all of the words of a phrase

I have solved the problem, but was unable to come up with an efficient enough solution that passes all test cases. It times out on 5 test cases.

Determine which sentences contain all of the words of a phrase
0: chris and jennifer had a fight this morning
1: chris went on a holiday
2: jennifer is in prison

Query Phrases are
0: chris jennifer
1: jennifer
2: prison

The goal is to find the indexes of the matching sentences for each query, or -1 if no matching sentence exists. The order of words does not matter.

Output :
0
0 2
2

i.e. the first query matches sentence 0, the second matches sentences 0 and 2, and so on.

Constraints

  • n: number of sentences
  • m: number of phrases
  • n, m < 10^4
  • Number of words in any sentence or query phrase is in range [1-10]
  • Each word has at most 11 chars
  • No word appears in more than 10 sentences
  • Each word consists of upper and lower case alphabet only
  • Each word must match exactly - i.e. likes and like do not match.

Input Format:

3
chris and jennifer had a fight this morning
chris went on a holiday
jennifer is in prison
3
chris jennifer
jennifer
prison

Each 3 gives the number of sentences or queries that follow.
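For concreteness, a minimal reader for this input format might look like the following sketch (the class and method names are just illustrative, not part of the problem):

```java
import java.util.*;

// Hypothetical helper for the input format above: a count line, that many
// sentences, another count line, then that many query phrases.
public class InputReader {
    public static List<List<String>> read(Scanner sc) {
        int n = Integer.parseInt(sc.nextLine().trim());
        List<String> sentences = new ArrayList<>();
        for (int i = 0; i < n; i++) sentences.add(sc.nextLine());
        int m = Integer.parseInt(sc.nextLine().trim());
        List<String> queries = new ArrayList<>();
        for (int i = 0; i < m; i++) queries.add(sc.nextLine());
        return Arrays.asList(sentences, queries);
    }

    public static void main(String[] args) {
        String sample = "3\nchris and jennifer had a fight this morning\n"
                + "chris went on a holiday\njennifer is in prison\n"
                + "3\nchris jennifer\njennifer\nprison\n";
        List<List<String>> parsed = read(new Scanner(sample));
        // prints: 3 sentences, 3 queries
        System.out.println(parsed.get(0).size() + " sentences, "
                + parsed.get(1).size() + " queries");
    }
}
```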


The following is what I tried.

1. My first solution:

  1. Build a HashMap for each sentence
  2. For each split word in the phrase:
    2-1. Check whether all the words exist in the sentence's hashmap
    2-2. If so, store the sentence index
    2-3. If no sentence matches, store -1.
  3. Print the result

Let p = the largest number of words in a sentence
Let k = the largest number of words in a query
Big O is O(npk)

public static void textQueries(List<String> sentences, List<String> queries) {
    List<Map<String, Integer>> sentenceMaps = createMaps(sentences);
    String results = queryMatcher(sentenceMaps, queries);
    System.out.println(results);
}


private static String queryMatcher(List<Map<String, Integer>> sentenceMaps, List<String> queries) {
    Map<String, Integer> wordCounter = new LinkedHashMap<>();
    List<List<String>> results = new ArrayList<List<String>>();
    for (String query : queries) {
        List<String> result = new ArrayList<>();
        for (int j = 0; j < sentenceMaps.size(); j++) {
            if (isQueryFound(sentenceMaps.get(j), query, wordCounter)) {
                result.add(j + "");
            }
        }
        results.add(result);
    }
    return generateResultString(results);
}


/*
 * StringBuilder used to reduce delays of calling multiple System.out.println();
 */
private static String generateResultString(List<List<String>> results) {
    StringBuilder stringBuilder = new StringBuilder();
    for (List<String> matchingSentenceIndexes : results) {
        if (matchingSentenceIndexes.isEmpty()) {
            stringBuilder.append("-1\n");
        } else {
            resultStringHelper(matchingSentenceIndexes, stringBuilder);
        }
        //stringBuilder.append("\n");
    }
    return stringBuilder.toString();
}

/*
 * add " " for multiple indexes result
 */
private static void resultStringHelper(List<String> result, StringBuilder stringBuilder) {
    for (int i = 0; i < result.size(); i++) {
        stringBuilder.append(result.get(i));
        if (i < result.size() - 1) {
            stringBuilder.append(" ");
        } else if (i == result.size() - 1) {
            stringBuilder.append("\n");
        }
    }
}
private static boolean isQueryFound(Map<String, Integer> sentenceMap, String query, Map<String, Integer> wordCounter) {
    String[] queryTokens = query.split(" ");
    for (String queryToken : queryTokens) {
        if (isMoreThan10Sentences(wordCounter, queryToken)) return false;
        if (sentenceMap.containsKey(queryToken)) {
            wordCounter.put(queryToken, wordCounter.getOrDefault(queryToken, 0) + 1);
        } else {
            return false;
        }
    }
    return true;
}

private static boolean isMoreThan10Sentences(Map<String, Integer> wordCounter, String token) {
    return wordCounter.getOrDefault(token, -1) > 10;
}

private static Map<String, Integer> initMap(String[] tokens) {
    Map<String, Integer> map = new LinkedHashMap<>();
    for (String token : tokens) {
        map.put(token, 0);
    }
    return map;
}

private static List<Map<String, Integer>> createMaps(List<String> sentences) {
    List<Map<String, Integer>> maps = new ArrayList<Map<String,Integer>>();
    for (int i = 0; i < sentences.size(); i++) {
        String[] tokens = sentences.get(i).split(" ");
        maps.add(initMap(tokens));
    }
    return maps;
}

Timeout in the last 5 test cases.

For small test cases, the benchmark is the following on their online coding server:
Map creation time: 9.23954E-4
Query matching time: 3.85751E-4

Map generation is expensive.


2. My second try:

Similar logic, but with concurrency applied, since the platform supports up to 2 threads.

Multi-threading is applied in two places:
1. Sentence -> Map generation (concurrent map generation)
2. Query matching (concurrent matching)

public static void textQueries(List<String> sentences, List<String> queries) {
    List<Map<String, Integer>> sentenceMaps = createMaps(sentences);
    String results = queryMatcher(sentenceMaps, queries);
    System.out.println(results);
}

private static String queryMatcher(List<Map<String, Integer>> sentenceMaps, List<String> queries) {
    List<Future<String>> futures = new ArrayList<Future<String>>();
    int threads = Runtime.getRuntime().availableProcessors();
    ExecutorService executor = Executors.newFixedThreadPool(threads);
    String[] results = new String[threads];
    int length = queries.size() / threads;
    for (int i = 0; i < threads; i++) {
        int queryStart = length * i;
        int queryEnd = length * (i+1);
        if (i == threads - 1) queryEnd = queries.size(); // cover the whole remainder, not just one extra query
        Callable<String> worker = new QueryMatcher(sentenceMaps, queries, queryStart, queryEnd);
        Future<String> submit = executor.submit(worker);
        futures.add(submit);
    }

    for (int i = 0; i < futures.size(); i++) {
        try {
            results[i] = futures.get(i).get();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
    String returnString = concaString(results);
    executor.shutdown();
    return returnString;
}

private static String concaString(String[] results) {
    StringBuilder stringBuilder = new StringBuilder();
    for (int i = 0; i < results.length; i++) {
        stringBuilder.append(results[i]);
    }
    return stringBuilder.toString();
}

private static String generateResultString(List<List<String>> results) {
    StringBuilder stringBuilder = new StringBuilder();
    for (List<String> matchingSentenceIndexes : results) {
        if (matchingSentenceIndexes.isEmpty()) {
            stringBuilder.append("-1\n");
        } else {
            resultStringHelper(matchingSentenceIndexes, stringBuilder);
        }
        //stringBuilder.append("\n");
    }
    return stringBuilder.toString();
}

private static void resultStringHelper(List<String> result, StringBuilder stringBuilder) {
    for (int i = 0; i < result.size(); i++) {
        stringBuilder.append(result.get(i));
        if (i < result.size() - 1) {
            stringBuilder.append(" ");
        } else if (i == result.size() - 1) {
            stringBuilder.append("\n");
        }
    }
}
private static boolean isQueryFound(Map<String, Integer> sentenceMap, String query, Map<String, Integer> wordCounter) {
    String[] queryTokens = query.split(" ");
    for (String queryToken : queryTokens) {
        if (isMoreThan10Sentences(wordCounter, queryToken)) return false;
        if (sentenceMap.containsKey(queryToken)) {
            wordCounter.put(queryToken, wordCounter.getOrDefault(queryToken, 0) + 1);
        } else {
            return false;
        }
    }
    return true;
}

private static boolean isMoreThan10Sentences(Map<String, Integer> wordCounter, String token) {
    return wordCounter.getOrDefault(token, -1) > 10;
}

private static boolean isQueryFound(Map<String, Integer> sentenceMap, String query) {
    String[] queryTokens = query.split(" ");
    //Map<String, Integer> duplicateChecker = new LinkedHashMap<String, Integer>();

    for (String queryToken : queryTokens) {
        if (sentenceMap.containsKey(queryToken)) {
            //if (!duplicateChecker(duplicateChecker, sentenceMap, queryToken))
            //return false;
        } else {
            return false;
        }
    }
    return true;
}

/*
 * this method checks for the case when there are duplicate words in query
 * i.e. sentence containing 2 hello will return false of queries with 3 hello
 */
private static boolean duplicateChecker(Map<String, Integer> duplicateChecker, Map<String, Integer> sentenceMap, String queryToken) {
    if (duplicateChecker.containsKey(queryToken)) {
        if (duplicateChecker.get(queryToken) == 0) return false;
        duplicateChecker.put(queryToken, duplicateChecker.get(queryToken) - 1);
    } else {
        duplicateChecker.put(queryToken, sentenceMap.get(queryToken) - 1);
    }
    return true;
}

private static List<Map<String, Integer>> createMaps(List<String> sentences) {
    List<Map<String, Integer>> maps = new ArrayList<>();
    int threads = Runtime.getRuntime().availableProcessors();
    ExecutorService executor = Executors.newFixedThreadPool(threads);
    List<Future<List<Map<String, Integer>>>> futures = new ArrayList<Future<List<Map<String, Integer>>>>();
    int length = (sentences.size()) / threads;

    for (int i = 0; i < threads; i++) {
        int start = i * length;
        int end = (i+1) * length;
        if (i == threads - 1) end = sentences.size(); // cover the whole remainder, not just one extra sentence
        List<String> splitSentence = new ArrayList<>(sentences.subList(start, end));

        Callable<List<Map<String, Integer>>> worker = new MapMaker(splitSentence);
        Future<List<Map<String, Integer>>> submit = executor.submit(worker);
        futures.add(submit);
    }

    for (int i = 0; i < futures.size(); i++) {
        try {
            for (Map<String, Integer> map : futures.get(i).get()) {
                maps.add(map);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
    executor.shutdown();
    return maps;
}

private static Map<String, Integer> initMap(String[] tokens) {
    // no synchronization needed: each call builds its own map, and a
    // synchronized static method would serialize the worker threads
    Map<String, Integer> map = new LinkedHashMap<>();
    for (String token : tokens) {
        map.put(token, 0);
    }
    return map;
}


public static class MapMaker implements Callable<List<Map<String, Integer>>> {
    private List<String> sentences;

    @Override
    public List<Map<String, Integer>> call() throws Exception {
        List<Map<String, Integer>> maps = new ArrayList<Map<String,Integer>>();
        for (int i = 0; i < sentences.size(); i++) {
            String[] tokens = sentences.get(i).split(" ");
            maps.add(initMap(tokens));
        }
        return maps;
    }

    public MapMaker(List<String> sentences) {
        this.sentences = sentences;
    }
}

public static class QueryMatcher implements Callable<String> {
    private List<Map<String, Integer>> sentenceMaps;
    private List<String> queries;
    private int queryStart;
    private int queryEnd;

    @Override
    public String call() throws Exception {
        List<List<String>> results = new ArrayList<List<String>>();
        for (int i = queryStart; i < queryEnd; i++) {
            List<String> result = new ArrayList<>();
            String query = queries.get(i);
            for (int j = 0; j < sentenceMaps.size(); j++) {
                if (isQueryFound(sentenceMaps.get(j), query)) {
                    result.add(j + "");
                }
            }
            results.add(result);
        }
        return generateResultString(results);
    }

    public QueryMatcher(List<Map<String, Integer>> sentenceMaps, List<String> queries, int queryStart, int queryEnd) {
        this.sentenceMaps = sentenceMaps;
        this.queries = queries;
        this.queryStart = queryStart;
        this.queryEnd = queryEnd;
    }
}

Although I hoped for some speedup on large test cases, it still timed out on the same 5 test cases.

And for small test cases, map generation time increased, due to the additional overhead of creating the thread pools.

Benchmark time:
Map time: 0.007669489
Query matching time: 3.22923E-4


3. My third solution - coding the above in C++

I questioned whether it was Java causing the timeout.
The platform actually gives shorter computation times for C++, so to my surprise, it still gave the same 5 timeouts.


4. My fourth approach - Regex

I knew it would be slower, but I still tried it in a futile attempt. The Big O is actually worse here, since I need to sort the words of each sentence to avoid an n! permutation of regexes...

public static void textQueries(List<String> sentences, List<String> queries) {
    stringSort(sentences);
    stringSort(queries);
    StringBuilder stringBuilder = new StringBuilder();

    boolean isExist = false;
    for (int index = 0; index < queries.size(); index++) {
        String query = queries.get(index);
        isExist = false;
        for (int i = 0; i < sentences.size(); i++) {
            if (Matcher(buildNaturalLanguage(query), sentences.get(i))) {
                stringBuilder.append(i + " ");
                isExist = true;
            }
        }
        if (!isExist) stringBuilder.append("-1");
        if (index != queries.size() - 1) stringBuilder.append("\n");
    }
    System.out.println(stringBuilder.toString());
}

private static void stringSort(List<String> strings) {
    for (int i = 0; i < strings.size(); ++i) {
        String string = strings.get(i);
        String[] stringParts = string.split(" ");
        StringBuilder stringBuilder = new StringBuilder();
        Arrays.sort(stringParts);
        for (int j = 0; j < stringParts.length; j++) {
            stringBuilder.append(stringParts[j] + " ");
        }
        strings.set(i, stringBuilder.toString());  // converted back to a String for code cleanness; returning String[] would be slightly more efficient, but only a minor improvement
    }
}

private static String buildNaturalLanguage(String query) {
    // System.out.println("query " + query);
    String[] stringParts = query.split(" ");
    String regular = "(([a-zA-Z])*(\\s))*";
    for (String word : stringParts) {
        regular += word + "(\\s(([a-zA-Z])*(\\s))*)";
    }
    return regular;
}

private static boolean Matcher(String regular, String sentence) {
    Pattern p = Pattern.compile(regular);
    Matcher m = p.matcher(sentence);
    return m.find();
}

Result: not only does it time out, it somehow produces wrong answers on 2 additional undisclosed test cases. I have no idea why.

Ω(nm^2 + p log p), assuming regex matching is O(m)


The only remaining idea I have is filtering some queries or sentences before even running the main algorithm (using the constraint: at most 10 matching sentences per word).

This constraint check is already implemented in my first and second solutions, however, so smarter filtering might be required.

The thing is, I think the BCR (best conceivable runtime) is O(MNP): you would still need to go through each query and sentence, and also split them if not using regex.

I am totally lost here. How can I actually increase the speed further than this?

Many thanks in advance.

asked Oct 22 '18 by zcahfg2


2 Answers

Maintain a HashMap that maps each String to a Set<Integer>. The idea is to keep track of which sentences a given word appears in. We use a set instead of an array in order to compute the intersection of two sets efficiently.

For each input sentence:

  • Tokenize it into words, and add the index of the current sentence to the Set corresponding to the current token.

For each query phrase:

  • Tokenize it into words.
  • Query for the Set of indices corresponding to each word
  • Take the intersection of all of these sets.

Time Complexity: Given that there are at most 10 words in each sentence, the cost of building the HashMap is O(10N log N). The cost of each query is O(10 log N).
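The steps above can be sketched in Java as follows (names are illustrative; this uses HashSet/TreeSet, so the exact log factors differ slightly from the analysis above):

```java
import java.util.*;

public class InvertedIndex {
    // Build word -> set of sentence indices, then answer each query by
    // intersecting the index sets of its words.
    public static List<Set<Integer>> match(List<String> sentences, List<String> queries) {
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int i = 0; i < sentences.size(); i++) {
            for (String word : sentences.get(i).split(" ")) {
                index.computeIfAbsent(word, k -> new HashSet<>()).add(i);
            }
        }
        List<Set<Integer>> results = new ArrayList<>();
        for (String query : queries) {
            Set<Integer> acc = null;
            for (String word : query.split(" ")) {
                Set<Integer> s = index.getOrDefault(word, Collections.emptySet());
                if (acc == null) acc = new TreeSet<>(s);  // TreeSet keeps indices sorted
                else acc.retainAll(s);
                if (acc.isEmpty()) break;                 // no sentence can match anymore
            }
            results.add(acc == null ? Collections.<Integer>emptySet() : acc);
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> sentences = Arrays.asList(
                "chris and jennifer had a fight this morning",
                "chris went on a holiday",
                "jennifer is in prison");
        List<String> queries = Arrays.asList("chris jennifer", "jennifer", "prison");
        // prints: 0, then 0 2, then 2 -- matching the sample output
        for (Set<Integer> r : match(sentences, queries)) {
            System.out.println(r.isEmpty() ? "-1"
                    : String.join(" ", r.stream().map(String::valueOf).toArray(String[]::new)));
        }
    }
}
```

Because each word appears in at most 10 sentences (per the constraints), every set being intersected has at most 10 elements, so each query touches a constant-bounded amount of data regardless of N.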

answered Nov 10 '22 by xrisk


I have the following idea that may speed things up; it seems similar to what Rishav proposed:

public static void main(String[] args) throws FileNotFoundException {

        Scanner sc = new Scanner(new FileInputStream("file.txt"));
        int numberOfSentences = Integer.parseInt(sc.nextLine());

        Set<Integer> sentences = new HashSet<Integer>();
        Map<String, Set<Integer>> words2Sentences = new HashMap<String, Set<Integer>>();
        for (int i = 0; i < numberOfSentences; i++) {
            String words[] = sc.nextLine().split(" ");
            for (int j = 0; j < words.length; j++) {
                if (!words2Sentences.containsKey(words[j])) {
                    words2Sentences.put(words[j], new HashSet<Integer>());
                }
                words2Sentences.get(words[j]).add(i);
            }
            sentences.add(i);
        }

        int numberOfPhrases = Integer.parseInt(sc.nextLine());
        List<Set<Integer>> phraseResults = new ArrayList<Set<Integer>>();
        for (int i = 0; i < numberOfPhrases; i++) {
            Set<String> words = new HashSet<String>(Arrays.asList(sc.nextLine().split(" ")));
            Set<Integer> result = new TreeSet<Integer>(sentences); // TreeSet keeps indices ordered
            for (String s : words) {
                // a word that appears in no sentence eliminates every candidate
                result.retainAll(words2Sentences.getOrDefault(s, Collections.<Integer>emptySet()));
            }
            phraseResults.add(result);
        }

        for (Set<Integer> set : phraseResults) {
            if (set.isEmpty()) {
                System.out.println(-1); // required output when no sentence matches
                continue;
            }
            StringBuilder sb = new StringBuilder();
            for (Integer i : set) {
                sb.append(i).append(' '); // space-separated indices
            }
            System.out.println(sb.toString().trim());
        }
    }

answered Nov 10 '22 by Bartek