The data in my data.csv file is:
07:36:00 PM 172.20.16.107 104.70.250.141 80 57188 0.48
07:33:00 PM 172.20.16.105 104.70.250.141 80 57188 0.66
07:34:00 PM 172.20.16.105 104.70.250.141 80 57188 0.47
07:35:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
07:44:00 PM 172.20.16.106 104.70.250.141 80 57188 0.49
07:45:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:46:00 PM 172.20.16.106 104.70.250.141 80 57188 0.33
07:47:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:48:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:36:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
07:37:00 PM 172.20.16.107 104.70.250.141 80 57188 0.48
07:37:00 PM 172.20.16.105 104.70.250.141 80 57188 0.66
07:38:00 PM 172.20.16.105 104.70.250.141 80 57188 0.47
07:39:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
07:50:00 PM 172.20.16.106 104.70.250.141 80 57188 0.49
07:51:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:52:00 PM 172.20.16.106 104.70.250.141 80 57188 0.33
07:53:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:54:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:40:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
This is my code:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object ScalaApp {
  def main(args: Array[String]) {
    val sc = new SparkContext("local[4]", "Program")
    // we take the raw data in CSV format and convert it into a tuple of fields
    val data = sc.textFile("data.csv")
      .map(line => line.split(","))
      .map(GroupRecord => (GroupRecord(0), GroupRecord(1), GroupRecord(2),
        GroupRecord(3), GroupRecord(4), GroupRecord(5)))

    val numPurchases = data.count()
    val d1 = data.groupByKey(GroupRecord(2)) // here is the error

    println("No: " + numPurchases)
    println("Grouped Data" + d1)
  }
}
I just want the same data grouped by source IP (2nd column) and ordered by time (1st column). So my required output is:
07:33:00 PM 172.20.16.105 104.70.250.141 80 57188 0.66
07:34:00 PM 172.20.16.105 104.70.250.141 80 57188 0.47
07:35:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
07:37:00 PM 172.20.16.105 104.70.250.141 80 57188 0.66
07:38:00 PM 172.20.16.105 104.70.250.141 80 57188 0.47
07:39:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
07:40:00 PM 172.20.16.105 104.70.250.141 80 57188 0.48
07:44:00 PM 172.20.16.106 104.70.250.141 80 57188 0.49
07:45:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:46:00 PM 172.20.16.106 104.70.250.141 80 57188 0.33
07:47:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:50:00 PM 172.20.16.106 104.70.250.141 80 57188 0.49
07:51:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:52:00 PM 172.20.16.106 104.70.250.141 80 57188 0.33
07:53:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:54:00 PM 172.20.16.106 104.70.250.141 80 57188 0.48
07:36:00 PM 172.20.16.107 104.70.250.141 80 57188 0.48
07:37:00 PM 172.20.16.107 104.70.250.141 80 57188 0.48
But I have a problem with my code, so please help me!
Your problem is that your second map creates a Tuple6 instead of a key-value pair, which is what any xxxByKey operation requires. If you want to group by the 2nd column, make GroupRecord(1) the key and the remaining fields the value, then call groupByKey, like this:
data
  .map(GroupRecord => (GroupRecord(1), (GroupRecord(0), GroupRecord(2),
    GroupRecord(3), GroupRecord(4), GroupRecord(5))))
  .groupByKey()